Government-Collected Data and Its Uses

Oct 24, 2019

Big data is a general term for the massive amount of digital information being collected from all sorts of sources into databases using various techniques. Around 2.5 quintillion bytes of data are added to these databases each day, with the majority of the world’s information collected in the last five years and only 10% of that data refined; the remaining 90% is raw and unstructured, just sitting there waiting to be analysed. This mammoth amount of big data, gathered largely by online methods and mined by the government, could be put to use by businesses for strategic or financial benefit. The government has begun to derive insight in real time to help support business decision making from multiple sources, including the web, biological and industrial sensors, video, email, and social media.

Many white papers, journals, articles, and business reports have proposed ways governments can use big data to serve their citizens by providing businesses with an information reservoir, with the desired outcome that businesses profit from the information and stimulate job creation. This would be a motivation for the government, as lower unemployment would mean a larger pool of people and profitable businesses to tax, ensuring a healthy revenue stream to maintain social services. As with any new, uncharted advantage, however, there are many things to consider, such as security. Any form of personal information is subject to classification and poses a risk to a person’s privacy. The greater the risk that the information gathered can be used to impersonate an individual, the more sophisticated the technology that will be needed to secure the big data.

It can be argued that highly sensitive personal data should never be made available to businesses; however, with proper precautions and fees in place, it would seem like a wasted opportunity not to.

 

 

Business and Government Compared

The private sector is well known for making decisive decisions, especially if they will maximise profits in the short term, whereas the public sector is known to take its time over decisions that will affect citizens in the long term, in order to reduce risk and increase the efficiency and effectiveness of those decisions. It follows that big-data applications likewise differ between the public and private sectors.

In business, the main goal is to earn profits by providing goods and services to satisfied customers. The government’s main goal is to maintain the welfare and civil order of its citizens through sustainable services. Although the primary missions of businesses and governments are not supposed to be in direct conflict with one another, data that is gathered by the government for public sector benefit can also be valuable to private sector business strategy.

Big-Data Applications in Northern Ireland

Northern Ireland sadly has an imbalance between public and private sector work, with the private sector currently accounting for only around 40% of the total. This large public sector is subsidised by taxes collected in the rest of the United Kingdom. It is therefore in Northern Ireland’s best interest to encourage private sector growth as much as possible in the form of government-funded initiatives.

These initiatives recognise that market research is critical to a company’s success. They can help small and large businesses to plan and execute marketing activities effectively by providing in-depth information to help them break into new markets. An example of a government initiative with the express intent of gathering data is Invest NI in Belfast. Its headquarters holds extensive market research and worldwide company directories that can be used free of charge, up to a point. Its range of business information includes details of trade fairs, guidance on import/export procedures, funding sources and legal agreements, and companies by sector, together with their owners and telephone numbers.

 

E-zine Issue 2

Sep 30, 2019

Data Set project partner meeting no.2 – Spain

Sep 18, 2019
Partners pose with the DataSet logo

From the 30th to the 31st of May 2019, the DataSET project consortium paid a visit to the birthplace of Cervantes, a World Heritage Site that carries the spirit of the 16th century through its early Renaissance architecture: Alcala de Henares, Spain. With a warm welcome from the representatives of Universidad de Alcala, the consortium held its second transnational partner meeting to share progress and discuss the future of the project.

The two-day programme provided the participants with the opportunity to discuss the status quo of the development of the intellectual outputs, dissemination efforts and plans for the sustainability of the project.

Output 1 – VET Guide to Data Skills Development

We are delighted to announce that the first output of the project, the VET Guide to Data Skills Development, has been finalized. The representative of the partner coordinating the creation of the guide (Sonia Naiba, Momentum) presented the final version to the consortium. The guide aims to raise awareness of the value of data skills for current and future entrepreneurs and to analyze which contemporary data skills are already known to entrepreneurs and business advisors. In addition, the guide summarizes strategies for teaching data skills to entrepreneurs, including best practices.

Specifically, the guide gives an overview of the following:

i) The results of a data skills survey, outlining the current skills and skills deficits of business trainers and advisors in participating countries;

ii) A review of the policy environment regarding data skills for entrepreneurs and data skills education, at EU, national and regional level;

iii) An introduction to strategies for teaching data skills to entrepreneurs, including best practice examples and testimonies.

The guide is openly accessible and can be downloaded here.

Output 2 – DataSet Open Education Resources

The DataSet Open Education Resources will comprise a curriculum, a trainer’s guide and a suite of interactive online learning materials that enable teachers and trainers to enhance entrepreneurs’ data skills in classrooms and small group training. The leading partner for this intellectual output, Universidad de Alcala, and its representative (Miguel Ángel Sicilia Urbán, a Data Scientist himself) presented the draft methodology for the data skills training model and led an extensive discussion on the DataSET curriculum with the consortium. The result is an almost finalized draft, and all partners are to pilot test the curriculum in small group sessions with business advisors and entrepreneurs in autumn 2019.

The training model and its accompanying resources will be tested during the Train the Trainers Learning Activity in Denmark, to be held in April 2020.

Output 3 – DataSet Online Course

At the meeting the partners discussed preliminary plans for translating the open educational resources, created as part of the previous DataSet output, into an online interactive learning course for entrepreneurs at all stages of their business development. Self-paced and open to virtually anybody, the course will open up opportunities for popularizing the acquisition of data skills among a wider European (and international) population.

In addition to the talks about the intellectual outputs, the partners briefly discussed the future arrangements for the learning week, dissemination and exploitation plans, and administrative issues regarding the project’s implementation phase.

Overall, the meeting was a success in terms of its outcomes, supported by the generous hospitality of the host partner. The partners enjoyed traditional central Spanish cuisine, the famous tapas, and a view of the historic city centre of Alcala de Henares. The next partner meeting is planned to take place in Leitrim, Ireland on 10-11 October 2019.

EU Data Policy-Making: from Open Access to Developing Data Economy

Aug 28, 2019

The development of computer technology and digitalization has made it possible to mine and store massive amounts of data, allowing businesses to identify new trends that can be used to make better decisions and seize new opportunities.

Big data has become a powerful driver of economic growth, competitiveness, innovation, job creation and societal progress. The EU aims to reap the full benefits of “big data fever” and maintain steady growth of the Digital Single Market. According to a recent study (http://datalandscape.eu/study-reports/second-report-facts-and-figures-dataset), the value of the EU data economy was more than €376 billion in 2018, accounting for 2.6 % of EU GDP. With time-bound policy measures and favorable legal conditions, the value of the EU data economy could more than double by 2025 and represent more than 6 % of overall EU GDP (http://datalandscape.eu/study-reports/second-report-facts-and-figures-dataset). Big data brings new opportunities, and the EU is acting fast to bring predictable rules of the game to the table.

The need for decisive steps and concrete actions first emerged in 2003, when the re-use of open public sector information (PSI) became legal for commercial and other purposes (https://ec.europa.eu/digital-single-market/en/open-data). This established a minimum set of rules and marked the beginning of the ‘open data’ era. Transparency and fair competition became key components of the ‘PSI Directive’ (Directive 2003/98/EC, https://eur-lex.europa.eu/legal-content/en/ALL/?uri=CELEX:32003L0098). The focus of this initiative was primarily economic, and it had a great impact on the further development of new services based on novel ways to combine and make use of public sector information. Open data policy came into force.

The next big stepping stone was the launch of the EU strategy ‘Towards a thriving data-driven economy’ in July 2014 (https://ec.europa.eu/digital-single-market/en/news/communication-data-driven-economy). It acknowledged that “data is at the centre of the future knowledge economy and society”, and an action plan was adopted based on the following pillars: community building and developing framework conditions for a single EU big data market. Fostering open data policies, e-infrastructure, the Internet of Things (IoT) and personal data protection issues were also covered in the new EU strategy on the data-driven economy (2014). Special attention was paid to the development of relevant data skills and infrastructures to the benefit of SMEs (https://ec.europa.eu/digital-single-market/en/news/communication-data-driven-economy).

In developing a common European data space and economy, the EU faced three main obstacles to data mobility within the EU in 2017-2018: unjustified restrictions imposed by Member States on the free flow of non-personal data, legal uncertainties, and a lack of trust among the main actors. According to a 2014 survey (https://ec.europa.eu/eurostat/statistics-explained/index.php/Cloud_computing_-_statistics_on_the_use_by_enterprises), 38 % of SMEs in the EU-28 lacked trust in data mobility due to security risks. The Commission therefore organized a number of public consultations with stakeholders to address the existing issues. As a result, the EU adopted a set of measures (outlined in the Communication on “Building a European data economy”) to enable the free flow of non-personal data across borders and to regulate new data technologies in terms of data access, portability and liability (https://ec.europa.eu/digital-single-market/en/policies/building-european-data-economy). Data privacy regulation was also put on the policy agenda, resulting in the adoption of the GDPR and ePrivacy legislation. These measures were designed to enhance digital trust and protect the data privacy of EU citizens (https://ec.europa.eu/digital-single-market/en/policies/online-privacy).

Over the past 15 years, the big data agenda has transformed from an open access data issue into building a common European data economy. Even though the efforts of EU policy-makers have intensified, the central concern remains the same: the EU could lose the competitive advantage of its Digital Single Market if it does not build a data-friendly regulatory framework. European SMEs risk losing out in global markets if they have limited access to data and data analytics. Financing and the relevant data skills are particularly big difficulties for SMEs that do not have enough resources to invest in data infrastructure and analytical tools. The EU should take a more proactive approach to supporting the digital transition and further developing the European Data Economy.

Image credit: pexels.com

Written by Aleksei Simonov, UIIN

The New VET Guide to Data Skills Development – Momentum

Aug 8, 2019

Context

In today’s digital, connected world, the savviness with which entrepreneurs employ information and communication technologies is essential to competitiveness. However, while digital communication skills have improved across the population generally, the ability to leverage information, especially data, is still underdeveloped. This is a lost opportunity: the volume of data that business owners have access to has grown exponentially and if “big” data is turned into actionable “smart” data, it can drive productivity, innovation and growth.

The EU states that “data-driven business models are the engine of Europe’s growth, industrial transformation and job creation”, which is part of its commitment to the digitalization of the economy.

One of the benefits is that businesses responding to smart data can improve products and services, which would, in turn, generate economic growth while contributing to social progress. However, micro-enterprises and SMEs, which make up 99% of businesses, still lag in digital technologies. If the European economy is to flourish, micro-enterprises and SMEs must develop data skills or risk being uncompetitive.

Nevertheless, there is an obstacle: today’s entrepreneurship teachers and trainers also face a data skills deficit. The majority entered the workforce before big data existed and there is currently no reliable source of training to help them boost their own skills. Prior to the start of the Data set project, East Belfast Enterprise conducted a small survey across 28 Local Enterprise Agencies in the Enterprise NI Network which found that “52% of business advisors said they were completely unaware of the range of data that is available and 70% rated their own knowledge of data skills as poor.”

About the VET Guide

The objective of the Guide is to raise awareness regarding the value of data skills for current and future entrepreneurs and increase knowledge of what contemporary data skills are and how they can be taught.

The Guide presents a comprehensive introduction to the role of data skills in VET. It includes the results of a data skills survey, outlining the current skills and skills deficits of business trainers and advisors in participating countries; a review of the policy environment regarding data skills for entrepreneurs and data skills education, at EU, national and regional level; and an introduction to strategies for teaching data skills to entrepreneurs, including best practice examples and testimonies.

Needs Analysis Assessment

The basis of the VET Guide is a needs assessment, a systematic process for determining and addressing needs, or “gaps”, between the current conditions and the desired conditions or “wants” of a specific group. The chosen method for conducting the Data Skills Needs Analysis of Business Trainers and Advisors in Ireland, Northern Ireland/UK, Spain, the Netherlands and Denmark was an Internet survey. This method was selected because it allows for a more diverse survey sample (the survey link was widely shared online), it is a low-cost, fast and efficient method, and the extensive networks of the partners provided a ready-made pool of participants.

The survey was made up of 12 short questions, had a 100% completion rate and was completed by 33 business advisors from five countries (Ireland, Northern Ireland/UK, Spain, the Netherlands and Denmark).

Needs Analysis Survey Results

Data skills proficiency is quite low among business advisors, with only 21% of those surveyed feeling their skills are proficient. The acquisition of data skills is of great importance to business advisors: 81% of those surveyed indicated that they would be interested in receiving or accessing free training and/or practical resources that they could use to teach entrepreneurs and SME owners about data skills applicable to their businesses.

Business advisors today favour a hands-on approach to providing business support, so the DataSet materials should be very practical in nature and solution oriented. Five key areas were identified where business advisors need upskilling with regard to digital skills, along with five key areas that are particularly relevant for SMEs; these are ranked in order of importance below:

Data Skills for Business Advisors (in order of importance):
1. Data/Information Analysis
2. Reporting Skills
3. Application of Data to solve problems/inform business ideas
4. Data Collection
5. Technical/Digital Skills

Data Skills for SMEs and Business Owners (in order of importance):
1. Application of Data to solve problems/inform business ideas
2. Communication Skills
3. Data/Information Analysis
4. Creative Thinking
5. Technical/Digital Skills

 

The VET Guide also includes an in-depth section on the Policy Environment regarding Data Skills for Entrepreneurs and Data Skills Education in the UK, Ireland, Spain, the Netherlands and Denmark, as well as a section dedicated to Strategies for Teaching Data Skills to Entrepreneurs.

The main goals of the VET Guide to Data Skills Development are to raise awareness of the value of data skills for business advisors and entrepreneurs and of approaches for delivering data skills training, and to lay the foundation for the DataSet Open Education Resources and the DataSet Online Course. The Open Education Resources will consist of a curriculum, a trainers’ guide and a suite of interactive online learning materials enabling teachers and trainers to enhance entrepreneurs’ data skills in classroom and small group training. The Online Course will be a multilingual, interactive learning course in which entrepreneurs at all stages of entrepreneurial activity can learn more and put data skills into practice.

More detailed information regarding the findings of the online survey is available in the VET Guide, which can be accessed in its entirety on the project website.

Data Skills Development Initiatives

Jul 26, 2019

Benefits of Big Data for Business Organizations

There are many small and large business organizations making use of big data to derive significant benefits. Below are a few examples of how gathering data, and using data that has already been gathered, can help a business at different stages of its development.

Risk Analysis For Start-Ups

Start-up company owners have a lot of things to consider when they are building their business. Sometimes it can be all too easy to become laser-focused on the day-to-day running of the business when there are other important factors that should be considered if set business goals are to be reached. Big data can help a start-up business assess its social impact, economic importance and accomplishments.

Data gathering can be as simple as analyzing peer-reviewed journals, newspapers, social media feeds and surveys. The information already gathered by these sources can allow a business to avoid common problems and save money.

Product or Service Re-Development

Are you currently running a business? Then you may be surprised to learn that data collection, or “big data”, can be of great importance to you. You may already know about the importance of feedback and be able to use it to benefit your business, but feedback is only one aspect of the information you could be collecting. Big data can help you figure out how customers perceive your product or service, and in turn you will be able to revamp or redevelop it to improve its reputation. Designing the product around the needs and wants of your base clientele is key to a successful business.

Product or Service Expansion

Consumers simply know what they want. They may not always know what they need, but they will always know what appeals to them and where their priorities lie. It is because of this that they will do their own research before they commit to a product or service. They will read reviews and watch YouTube videos about the product, and about potential comparison products, perhaps after seeing an advert that piqued their interest.

With “big data”, businesses can profile customers in a far-reaching manner. Business owners should be aware of these review and comparison sites in order to track the desires and preferences of their customers. Based on this, they can offer products and services tailored to their customers. They can also do a cost analysis of the customers’ desires to see whether a product or service is viable in the first place.

Targeted Marketing & Data Management

Once you have gathered your data in the form of customer feedback, emails, names, social media account insights and customer addresses, you may wish to employ computer-aided software to begin advanced information breakdowns for marketing purposes. For instance, you may wish to pinpoint where on the map the majority of your clients come from, the gender split, or simply which social media platform is predominantly used by your clientele. All this information will allow you to create tailored marketing campaigns, which is especially valuable if your marketing budget is limited.
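To make this concrete, here is a minimal sketch of that kind of breakdown using pandas; the column names and customer records are invented purely for illustration and are not drawn from any real dataset.

```python
# A made-up customer table broken down by location, gender and platform.
import pandas as pd

customers = pd.DataFrame({
    "name":     ["Ana", "Ben", "Cara", "Dan", "Eve"],
    "city":     ["Belfast", "Derry", "Belfast", "Lisburn", "Belfast"],
    "gender":   ["F", "M", "F", "M", "F"],
    "platform": ["Instagram", "Facebook", "Instagram", "Twitter", "Instagram"],
})

# Where do most clients come from?
print(customers["city"].value_counts())

# Gender split, as percentages
print(customers["gender"].value_counts(normalize=True) * 100)

# Which social media platform is predominantly used?
print(customers["platform"].value_counts().idxmax())
```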

Staff members should be trained in how to gather data as standard practice and in “big data” management, which is not as simple as you might think; there can be large penalties associated with the mismanagement of customer data. Where big data is sensitive to the identity of a client or customer, such as financial, medical or criminal records, the data must be encrypted and protected from public access. By this, we mean that this sort of data should be kept offline and in no way accessible from a cloud source. Such breaches of privacy can be costly to any company and in some instances mark the end of the business entirely due to the fines incurred.

Methods of Analysing Data

Jul 14, 2019

Implementing business intelligence software in your company or organisation is about more than simply collecting additional data; it is about making this data work for you in the form of actionable information. The amount of data an organization can collect today from a variety of sources is staggering. The ability to see what’s behind the curtain and understand which campaigns or actions are working can help a business owner prepare for future trends.

However, without a proper understanding of the data that is collected, all those figures, numbers and statistical social insights become substance with no context.

It should be noted that there is no single correct method for analysing data. The needs of a business and the data it aims to collect will primarily dictate which methods of analysis suit best, and even then, techniques can be fluid to deliver the best results. That said, there are some tried and tested methodologies built into different software packages because they do work.

The first step in choosing the right data analysis technique for a data set is understanding what type of data it is: quantitative or qualitative. What exactly is the difference between the two? Quantitative data deals mostly with the volume of information. It is the cold, hard facts that can only be brought in by the numbers: for example, the number of sales, click-through rates (CTR) on online marketing campaigns, return on investment (ROI) and other measurable figures that can be scrutinised objectively.
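As a quick worked example of such measurable figures, the snippet below computes a click-through rate and a return on investment from made-up campaign numbers (Python is used here purely for illustration).

```python
# Two common quantitative measures, computed from invented campaign figures.

clicks, impressions = 420, 15_000
ctr = clicks / impressions * 100          # CTR as a percentage
print(f"CTR: {ctr:.2f}%")                 # 2.80%

revenue, campaign_cost = 9_500, 4_000
roi = (revenue - campaign_cost) / campaign_cost * 100
print(f"ROI: {roi:.1f}%")                 # 137.5%
```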

Qualitative data is a little more ambiguous and nebulous, but no less valuable. It is more subjective, requires a degree of introspection and is very much open to interpretation. This sort of data applies to things like customer reviews, surveys, and even observing customer and staff interactions with a product or service. It is about the quality of how a product or service is perceived, which makes methods of analysing this sort of data a bit more difficult, since it is often less structured.

Below are three methods that can be used to measure quantitative data and two that deal with qualitative data, which will hopefully help a business begin to categorise and break down the data it may already hold.

Measuring Quantitative Data

Regression Analysis

In order to understand regression analysis fully, it is imperative to understand the following terms:

Dependent Variable: This is the main factor you are trying to understand or predict.

Independent Variables: These are the factors hypothesised to have an impact on the dependent variable.

Whereas you can only have one dependent variable, you can have a myriad of independent variables. Regression analysis measures the relationship between the independent variables and the dependent variable; once that relationship is understood, it can be used to help a company make predictions about future trends with a good degree of confidence. Regression analysis is a reliable method of identifying which variables have an impact on a topic of interest. The process of performing a regression allows you to confidently determine which factors matter most, which factors can be ignored, and how these factors influence each other.
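A minimal sketch of what this looks like in practice is shown below, using Python and scikit-learn; the advertising-spend, sales-call and sales figures are invented for illustration, as is the choice of library.

```python
# Simple regression: two independent variables predicting one dependent variable.
import numpy as np
from sklearn.linear_model import LinearRegression

# Independent variables: monthly ad spend (EUR) and number of sales calls
X = np.array([
    [1000, 20],
    [1500, 25],
    [2000, 35],
    [2500, 40],
    [3000, 55],
])
# Dependent variable: monthly sales (units)
y = np.array([110, 150, 210, 240, 320])

model = LinearRegression().fit(X, y)
print("Coefficients:", model.coef_)   # impact of each independent variable
print("Intercept:", model.intercept_)
print("R^2:", model.score(X, y))      # how well the model explains the data

# Predict sales for a planned month: 2,200 EUR ad spend and 38 calls
print("Predicted sales:", model.predict([[2200, 38]]))
```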

Hypothesis Testing

Hypothesis testing, also known as “t-testing”, is used to infer the result of a hypothesis performed on sample data from a larger population. The test tells the analyst whether or not the primary hypothesis is supported. For instance, a business owner may assume that more hours of work are equivalent to higher productivity. How exactly would they know this to be factually the case? Before implementing longer work hours, it is important to ensure there is a real connection, to avoid wasting money on wages, and to consider whether the longer hours would even be received well by staff. A business decision made without knowing how it may affect staff can have the opposite effect, with productivity dropping.
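As a hedged illustration of the work-hours example above, the sketch below runs a two-sample t-test with SciPy on made-up productivity figures for two groups of staff.

```python
# Two-sample t-test on invented productivity figures.
from scipy import stats

# Units produced per week by staff on standard hours vs. extended hours
standard_hours = [42, 45, 39, 47, 44, 41, 43]
extended_hours = [44, 46, 40, 45, 43, 42, 44]

t_stat, p_value = stats.ttest_ind(standard_hours, extended_hours)
print(f"t = {t_stat:.2f}, p = {p_value:.3f}")

# A large p-value (e.g. above 0.05) gives no evidence that longer hours
# actually change productivity, so the extra wage cost may not be justified.
```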

Monte Carlo Simulation

Monte Carlo simulation is a technique used to understand the impact of risk and uncertainty in financial, project management, cost, logistics and other forecasting models. A Monte Carlo simulator helps one visualize most or all of the potential outcomes so as to have a better idea of the risk of a decision. To test a hypothesis or scenario, a Monte Carlo simulation uses random numbers and data to stage a variety of possible outcomes for any situation. It allows the analyst to understand which random variables could throw a monkey wrench into a company’s project or strategy.
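A minimal sketch of a Monte Carlo simulation is shown below, using NumPy; the project-cost distributions are assumptions chosen purely for illustration.

```python
# Monte Carlo simulation of total project cost under uncertain inputs.
import numpy as np

rng = np.random.default_rng(42)
n_runs = 100_000

# Uncertain inputs for a small project (EUR): labour, materials, delays
labour    = rng.normal(loc=50_000, scale=8_000, size=n_runs)
materials = rng.normal(loc=20_000, scale=3_000, size=n_runs)
delays    = rng.exponential(scale=5_000, size=n_runs)

total_cost = labour + materials + delays

print("Expected cost:", round(total_cost.mean()))
print("90th percentile cost:", round(np.percentile(total_cost, 90)))
print("Probability cost exceeds 90,000 EUR:",
      round((total_cost > 90_000).mean(), 3))
```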

Measuring Qualitative Data

Content Analysis

Content analysis is a research technique used to make replicable and valid inferences by interpreting and coding textual material. By systematically evaluating texts (e.g., documents, oral communication, and graphics), qualitative data can be converted into quantitative data.
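As an illustration of how coding text converts qualitative data into quantitative data, the sketch below counts how often a handful of made-up customer reviews touch on a few example categories; both the reviews and the coding scheme are invented.

```python
# Coding free-text customer reviews against simple categories.
from collections import Counter

reviews = [
    "Delivery was slow but the product quality is excellent",
    "Great customer service, quick delivery",
    "Poor quality packaging, item arrived damaged",
]

# Coding scheme: map categories to indicative keywords
codes = {
    "delivery": ["delivery", "arrived", "shipping"],
    "quality":  ["quality", "damaged", "excellent"],
    "service":  ["service", "support", "helpful"],
}

counts = Counter()
for review in reviews:
    text = review.lower()
    for category, keywords in codes.items():
        if any(word in text for word in keywords):
            counts[category] += 1

# The coded counts turn qualitative text into quantitative data
print(counts)  # Counter({'delivery': 3, 'quality': 2, 'service': 1})
```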

Narrative Analysis

Narrative analysis is a genre of analytic frames whereby researchers interpret stories that are told within the context of research and/or shared in everyday life. This might include interpreting how employees feel about their jobs, how customers perceive an organization, and how operational processes are viewed. It can be useful when contemplating changes to corporate culture or planning new marketing strategies.

Your Data in the Turmoil of Brexit

Jun 1, 2019

International transfers of personal data are instantaneous and constant. Everyday business functions such as uploading data files to the cloud or sending emails potentially involve transferring personal data across international borders. This is particularly relevant in today’s global economy, where business functions are often outsourced overseas for operational and cost efficiencies. Following Brexit, the UK will be a ‘third country’ for the purposes of international data transfers under the GDPR, which could have serious implications for the practicalities of legally transferring personal data from the EU to the UK. We would like to examine the possible outcomes of the ongoing Brexit negotiations for the transfer of personal data from the EU to the UK.

 

Deal or No Deal

On 14 November 2018, the UK government published a draft withdrawal agreement (governing the terms of the UK’s departure from the EU), Article 71(1) of which anticipates a transition period for the continued application of EU data protection law (i.e. the GDPR) to the processing of the personal data of individuals resident outside the UK, provided that the personal data: (a) was processed under EU law in the UK before the end of the transition period; or (b) is processed in the UK after the end of the transition period on the basis of the withdrawal agreement. However, the continued application of EU data protection law essentially just preserves the status quo: personal data may only be transferred to third countries (such as the UK after Brexit) if the European Commission has granted that country an adequacy decision or, in the absence of an adequacy decision, if certain safeguards are adopted in relation to the transfer or a specific derogation can safely be relied upon.

There was potentially a silver lining to this situation. Under Article 71(2), the transitional arrangements referred to above will fall away if the Commission makes an adequacy decision, essentially acknowledging that the UK’s processing of personal data provides a satisfactory level of protection to EU-based individuals. This would mean little to no disruption for businesses, since there would be no need to rely on safeguards in order for personal data to be transferred from the EU to the UK.

The big ‘but’, however, is that there is no guarantee that the UK Parliament and the leaders of the EU will come to an agreement before 29 March, and even if there were a deal of sorts, it is very possible it would not be permanent. If an adequacy decision ceased to apply for any reason, then Article 71(3) requires the UK to “ensure a level of protection of personal data essentially equivalent to that under EU law…” That may sound as if the EU is forcing the UK to comply with its rules and, in ‘Borg’-like fashion, assimilate whether it wants to or not, but the fact is the UK has already had the GDPR incorporated into its own Data Protection Act since 2018. Domestically, the UK may have more wiggle room as to how that law is enforced after the withdrawal.

On 25 November 2018, a summit of EU leaders unanimously approved the terms of the draft withdrawal agreement. However, on the 15th of January this year, the UK Parliament failed to approve the withdrawal agreement, resulting in a historic defeat for the Prime Minister’s proposition. This unfortunately brings the idea of a no deal Brexit very close to reality, given that the deadline for Brexit is the 29th of March this year. The Commission has expressly stated that the adoption of an adequacy decision is not part of its contingency planning. EU member states do not have the power to unilaterally grant adequacy decisions to third countries, as approval from representatives of all EU member states is required.

A no deal Brexit therefore suggests an extended period of reliance on the safeguards and derogations referred to above in order to legally transfer personal data from the EU to the UK. Reliance on these measures to govern all transfers of personal data from the EU to the UK is likely to be cumbersome in practice, partly given the rigid nature of the Standard Contractual Clauses (SCCs) and the magnitude of the task presented by establishing legally sound Binding Corporate Rules from a time and resource perspective.

 

What you can do now

There is no way to be certain that the UK Parliament will go forward with the deal as it currently stands so the best thing to do is to proceed as if there were no deal in place. This means reliance on the safeguards and derogations in order to legally transfer personal data from the EU to the UK. If your business is reliant upon such data transfers from the EU, it would be advisable to consider putting in place contingency plans for a no deal Brexit by preparing for the use of appropriate safeguards and/or derogations.

 

Blog article provided by: East Belfast Enterprise, UK

Image Credit: www.freestock.org

Edison: Building the data science profession

May 12, 2019

One of the largest job advertisement websites, Glassdoor, has identified Data Scientist as the top job in its listings for three consecutive years. However, while demand for Data Scientists is on the rise, there is quite a noticeable gap in understanding Data Science as a profession, let alone comprehensive training schemes for data scientists. To address this gap, the EDISON Project set out to contribute to understanding and building the data science profession by creating the EDISON Data Science Framework. In this interview, we share with you first-hand insights from the Project Director and Senior Researcher at the System and Network Engineering Group, University of Amsterdam, Yuri Demchenko.

Can you briefly summarize the ideas behind EDISON project? What are its main outcomes and its relevance to a data-fueled economy?

The EDISON project (2015-2017) was focused on coordinating and supporting activities to foster the creation of the Data Science profession in Europe (and beyond), which involved interaction with multiple stakeholders from academia, universities, standardisation bodies and professional organisations. The main outcome of the EDISON project is the EDISON Data Science Framework (EDSF), which includes the following components:

  • Data Science Competence Framework (CF-DS),
  • Data Science Body of Knowledge (DS-BoK),
  • Data Science Model Curriculum (MC-DS), and
  • Data Science Professional Profiles (DSPP).

The EDSF provides a conceptual basis for the Data Science Profession definition, targeted education and training, professional certification, organizational capacity building, and organisation and individual skills management and career transferability.

The definition of the Data Science Competence Framework (CF-DS) is a cornerstone component of the whole EDISON framework. CF-DS provides a basis for the Data Science Body of Knowledge (DS-BoK) and Model Curriculum (MC-DC) definitions, and further for the Data Science Professional Profiles definition and certification.

The CF-DS incorporates many of the underpinning principles of the European e-Competence Framework (e-CF3.0) and provides suggestions for extending e-CF3.0 with Data Science related competences and skills. The CF-DS and DSPP have also adopted, and intend to comply with, the structure of the European ICT Professional Profiles and the European Skills, Competences, Occupations (ESCO) Framework. Corresponding information is provided in both the CF-DS and DSPP documents.

The presented Data Science Competence Framework definition is based on an analysis of existing frameworks for Data Science and ICT competences and skills, and is supported by an analysis of the demand side for the Data Scientist profession in industry and research. The presented CF-DS Release 3 is extended with the skills and knowledge subjects/units related to the competence groups. The document also refines the Data Science workplace skills definition, which includes the Data Science professional skills (acting and thinking like a Data Scientist) and the definition of the general “soft” skills often referred to as 21st Century skills.

Currently the EDSF is maintained by the EDISON Community initiative (coordinated by the University of Amsterdam), with a GitHub working area.

What are the target professional groups for the EDISON project implementation? Is there a variety of professional roles, domains and uses for which EDISON is applicable?

The Data Science Professional Profiles (DSPP) define the whole set of professional profiles related to Data Science, Data Management and Governance, and Data Stewardship. The DSPP defines 22 profiles, ranging from desk and support workers who enter data, to Big Data infrastructure management, Data Science and Analytics professionals, and organisational management profiles (e.g. Chief Data Scientist, Chief Data Officer, etc.). The EDSF is also applicable to, and provides a set of tools to define, other Data Science and Analytics (DSA) enabled professions in other science, industry and business domains and sectors.

How can EDISON be extended or adapted for particular or specific uses?

The EDSF has a modular organisation and all documents are extensible, with continuous work in progress and regular releases. Extensibility points are defined for each document:

• Data Science Competence Framework (CF-DS),
• Data Science Body of Knowledge (DS-BoK),
• Data Science Model Curriculum (MC-DS), and
• Data Science Professional Profiles (DSPP).

We are currently running a call for contributions to the next release, Release 4, with a deadline of 30 September 2019. Check the EDISON community to read more: EDISON community

Would you imagine EDISON (or a subset of it) as a basis for training entrepreneurs who seek to start up in the domains of Big Data and business analytics?

One of the tasks in the future/ongoing EDSF development is to define DSA (Data Science and Analytics) training profiles for managers of data driven companies. Recent research and development have created tools and methodologies to create tailored curricula based on required professional profiles and on competence/skills gaps defined through individual or team benchmarking.

The EDSF also contains a definition of the Data Science workplace skills (also called transversal skills) and 21st Century skills, which are widely applicable to data driven companies, Industry 4.0 and digital transformation.

Are there some competences and skills in EDISON that are essential for the business aspects of data-intensive companies? Are there, perhaps, skills that are essential for managers to understand their own capabilities, costs and business implications?

There are examples of using the EDSF in different domains. A number of currently running projects use the EDSF for different research and business domains:

• ELIXIR, RItrain, CORBEL – Bioinformatics Research Infrastructure
• MATES – Digitalisation of Maritime Industry
• FAIRsFAIR – definition of the Data Stewardship curriculum

Many other projects are influenced by the EDISON methodology and the EDSF conceptual model. In conclusion, the skills and competences from the EDISON Framework are applicable in a wide range of fields and relevant for starting entrepreneurs.

Can you briefly tell us on the future roadmap of EDISON?

The University of Amsterdam (UvA) team, the initial EDSF developer, will work as an interim coordinator and facilitator with a view to creating a community-delegated coordination group that will oversee wider EDSF development and implementation.

Participation in the EDISON/EDSF initiative and Open Source project is open to any party that can contribute to the framework’s development, implementation, promotion, and sponsoring or funding.

The GitHub project serves as a hub for all future activities on EDSF development, calls for contributions, and the search for new funding and/or sponsorship.

The content of the wiki will grow with time and will integrate the EDISON project legacy, including the DataSciencePro community portal.

Interviewed by: Miguel-Ángel Sicilia Urbán, The University of Alcala

Data Skills Landscape: A Dutch Case

Apr 24, 2019

The interest in and the need for data skills training among non-data-savvy folks has been gaining momentum for the last couple of years. The Netherlands, the fastest growing data hub in Europe, has made a tremendous leap in producing credentialed data scientists at its universities, and is now striving to equip in-service specialists from both the public and private sectors with relevant data skills. Here is an overview of the Dutch institutions that are tirelessly working on creating and actively sharing knowledge in data science.

Contextual reality

The Netherlands was the second country in the world to connect to the Internet, in 1989. Since then, the country has quite consistently maintained top positions in digital infrastructure. To uphold the country’s frontrunner reputation in the digital world, the Dutch government highlights the importance of data science and data skills development in its policies.

In the recent Dutch National Research Agenda, one of the explored routes, ‘Responsible use of big data’, connects data specialists with a wide group of stakeholders, making it clear that the use of big data has to become commonplace practice. The Agenda also emphasizes the importance of big data for every sector of the Dutch economy. Complementing the National Research Agenda, the Strategic Agenda for Higher Education and Research 2015-2025 recognizes the importance of developing ‘information, media and technological skills’ among university students, including the competences related to working with big data.

Moreover, in the Human Capital Agenda ICT of 2015, the Dutch government stresses the importance of data literacy by encouraging students’ awareness and interest in big data and cloud/cyber security, together with creating more regional centers of expertise that would deal with data-related issues.

Alongside the Dutch government, the Coalition for Digital Skills and Jobs in the Netherlands is working to ensure that all citizens have access to digital literacy initiatives. The coalition sees the inclusion of digital skills in education curricula as a base element of a digitally literate society. So far, a few initiatives launched in partnership with the Dutch Coalition, such as CodePact, Geef IT Door, and Dutch Digital Delta, aim to upskill students and other citizens in data science and ICT.

The more, the stronger

The Dutch data skills landscape is manifold. It includes national funding bodies and policy makers (e.g. the Netherlands Organization for Scientific Research), as well as organizations responsible for providing IT services to academia (e.g. SURF). In response to the ever-growing amount of data, the landscape is being enriched with data science research centers that develop state-of-the-art data applications (e.g. the Netherlands eScience Center). Undoubtedly, universities and their joint initiatives are the primary providers of certified data specialists (e.g. Jheronimus Academy of Data Science). Yet, since the demand for data-savvy professionals is getting stronger, the niche in training is being filled by other agencies that provide tailor-made educational services in data-related skills acquisition (e.g. Xomnia).

The institutions, presented in the table below, are, by and large, united under the umbrella of the same purpose – development of data science and proper utilization of data tools in the Netherlands. They provide a wide range of services, incl. training, research, networking, corporate solutions, and more. The institutions are either based in the Netherlands as a national agency (e.g. Centrum Wiskunde en Informatica), or a part of a wider international network (e.g. Growth Tribe). Some of them collaborate on the shared projects.

 


Though it is a frontrunner in digital infrastructure, has designed impressive policy instruments and has initiated a good diversity of science centers, the Netherlands still has a way to go in developing data literacy skills among its citizens. To shorten the journey, the deficit in training has to be wiped out. For this purpose, the Data SET project is set to improve the knowledge and skills of entrepreneurship education providers, e.g. VET colleges, enterprise agencies, local authorities and universities, in understanding and delivering relevant data analytics skills to early-career entrepreneurs.