
SIGHT


Essay: Data literacy – fuel for a human future


In the first part of a two-part article, MAL FLETCHER, social futurist and chairman of London-based thinktank 2030Plus, says the rise in the use of digital data in our work, rest, relationships and play means we urgently need to equip ourselves with enhanced skills to handle it…

London, UK

Data, it is often said, is the currency of the future. 

Actually, trust is that currency. Trust in how data is collected and shared by machines and trust in our own capacity to analyse data through logical and critical thinking.

Digital marketing concept, Business technology, Mobile payments, Banking network, Online shopping, Hand touching global network and data customer connection on dark blue background.

PICTURE: ipopba/iStockphoto

The future will increasingly be shaped by decisions we make using online data and by the artificial intelligence machines that feed on that data.

If the data can’t be trusted, or we have no confidence in our ability to analyse it, we’ll never hold AI to account or produce innovation that improves life on earth.

“We urgently need to equip ourselves and emerging generations with enhanced skills to deal with all things digital, in a way that enhances our humanity.”

This is a matter of concern. While most of us rely on digital data to work, rest, relate and play, we are functionally illiterate when it comes to reading, understanding and interpreting it.

We urgently need to equip ourselves and emerging generations with enhanced skills to deal with all things digital, in a way that enhances our humanity.

Yes, there are encouraging signs that we may be getting a little more data savvy. Meta has just decided to join Twitter in using subscription services on its social media platforms.

These Big Tech behemoths are losing money. They can no longer rely on revenue from advertising, which is declining because user numbers have dropped. People are becoming more discerning about where they upload private data about their preferences, opinions and lives.

Yet, for many people, there is still an abiding suspicion that technology is developing quickly in ways they can’t understand, much less influence. This contributes to what Alvin Toffler called “future shock”.

Change is inevitable and sometimes revolutionary. But when revolutionary change happens on many fronts at once, people grow anxious.

Multiple studies over the past twenty-five years have shown how deeply everyday engagement with digital tools can negatively impact mental health.

Yet there are still very few opportunities for people to learn data literacy skills. These would help them navigate the bewildering world of big data, which drives so much of our interaction with technology.

Big data analysis is one of the most profound results of the digital revolution. It brings many benefits which most of us now take for granted.

At the macro level, it helps governments predict how policies will affect economies and the impact of climate change on regions and industries.

It informs the development of vaccines and other medical treatments. It helps urban planners design new streets and envision smart cities.



Data also drives novel technologies like the one behind the much-lauded ChatGPT. The "GPT" in its name stands for generative pre-trained transformer.

Contrary to its popular image, there is nothing really “intelligent” about this technology. It is mainly a sophisticated aggregator of material already created by human minds.

What is impressive is that it researches quickly and produces output in fluid, almost human-like ways. There may be great benefits with this technology, in fields including education and research – and it’s data-driven all the way.

Perhaps even more significant is the fact that data fuels "machine learning". By analysing huge swathes of online information, networks of computers identify patterns and anomalies in the data and, from these, infer rules for behaviour.

This allows them to improve their programming, teaching themselves to carry out complex tasks in more efficient ways.

For all of its benefits, though, big data sets alarm bells ringing in some quarters. The warnings sound most ominously in growing debates about “open” versus “closed” data. Each of these positions is now promoted via a date in the global calendar.

Earlier this month – on Saturday, 4th March – the UK celebrated International Open Data Day. Its supporters advocate that many, if not all, forms of data should be accessible to everyone.

There may be some controls over the re-use of data, they say, but our focus should be on freeing information so that people can learn from and innovate with it. Medical studies, climate reports and records of government decisions are prime targets for Open Data advocates.

Data Privacy Day, celebrated in January, highlights the need to tighten our safeguards on personal information.

Advocates of both positions agree on at least one thing: we must all become more aware of data’s place in our lives.

Data literacy needs to play a larger role in our schools and workplaces and in the wider society. This is especially true for emerging generations, who will face the greatest opportunities and challenges with technology.

While there are plenty of data science courses for tertiary students, there appears to be very little on offer for young secondary or primary students.

Becoming more data literate would help people of all ages to understand the implications of the collective oceans of data we generate every day.

Most of us, I think, rely on digital tools so much that we assume the technology behind them is inherently benevolent. We hardly think about who develops the platforms we use. We don’t stop to ask what their motivations might be, or what they intend to do with the data we give them. Or, even more concerning, how that information might be used by AI and machine learning.

In this decade, we will rub shoulders more and more with artificial reality, machine learning and human-machine synthesis such as brain implants. Plus tools that nobody’s imagined yet.

We must focus now on preparing ourselves for the totality of that digital experience.

“Data literacy programs would help us understand the relationship between data collection and artificial intelligence.”

The unreal world of real data
Data literacy programs would help us understand the relationship between data collection and artificial intelligence.

The capacity of machines to learn is a subject of interest and concern for experts and novices alike. Data analysis drives the development of AI and will continue to fire the engine of machine learning for decades to come.

Artificial intelligence, feeding on cloud-based data, helps us build predictive models for everything from natural disasters to wars and pandemics.

That said, though we don’t yet know the full capacity of machines to learn, two things are already clear: faulty or incomplete data can produce very troublesome AI, and one of the major sources of faulty data is the human machine.

In 2019, an MIT study showed that facial recognition algorithms developed by Amazon, Microsoft and IBM had higher rates of error when identifying people with darker skin tones. Another study by Stanford and the University of Washington in 2019 discovered that AI could be biased against people with disabilities.

AI systems can be infected with the prejudices of their human programmers as well as biases within online data, much of which also originates with human beings.

In the long run, our technologies shape us. (Consider how social media have coarsened our public discourse.) Long before that, though, we shape our technologies.

OpenAI, the home of ChatGPT and one of the world’s leading AI companies, is a case in point.

TIME magazine recently reported that poor workers in Kenya are employed by OpenAI as “data enrichment professionals”. These people label flawed data on the internet that could pollute OpenAI’s programming through machine learning. For this intensive labour, they are paid just two dollars an hour, at most.

Until now, little has been known about their situation. That’s partly because Big Tech wants to project an illusion of AI as being a kind of miracle, a wonder that evolves without human agency. It’s a very useful illusion for companies that need to attract billions of investment dollars. But it potentially reduces their responsibility to provide proper oversight.

This story reminds us that behind the bells-and-whistles curtain of many high-tech tools, there sits a human wizard, sweating away for a pittance, while others rake in huge profits.

Data literacy courses would teach people of all ages how to research the Big Tech platforms they use, looking at their ethics and actions.

Another weakness of AI is the fact that its algorithms are vulnerable to attacks and security breaches, which can cause them to malfunction or make inaccurate predictions.

What’s more, some AI models are now too complex for humans to interpret unaided. This makes it difficult for us to identify potential flaws.

Fortunately, the future is not simply a product of the technologies we develop. It is at least as much a product of how we, as moral agents, choose to use those tools and equip ourselves to do so.




Teaching data literacy
Future human choices will be shaped by the innate traits of generations who carry the future on their shoulders. For this reason, it’s vital to train today’s Generation Alpha children and young teenagers to be data literate.

Data literacy training often involves skills in visualisation – the analysis and creation of graphs, mindmaps and infographics that illustrate links between facts. They (literally) help us see “the big picture”!

Data literacy projects often teach the basic principles and mechanics behind software development, or coding. They help us understand how the different cogs in the AI machine work – its algorithms and bots, for example.

They also encourage an appreciation for logic and the sequential thinking that underlies computer coding.

Data literacy helps us understand the social implications of data-driven tools.

We often assume that because digital tools form the wallpaper of our lives, most of the technology behind them is benevolent. We hardly think about who develops the tools we use. We don’t ask what their motivations are or what they intend to do with our private data.

When I read a newspaper, I try to remember that it represents an organisation with its own internal culture, a set of preferences for thought and behaviour. Big Tech companies also operate according to internal cultures, which either enhance or pollute their output.

In their case, though, information is also more directly shaped by the biases of customers or users.

Social media apps

Social media apps. PICTURE: Adem AY/Unsplash

That’s especially true with social media platforms. Taken together, they represent the world’s largest repository of human opinion.

Every day, 500 million messages are uploaded to Twitter. The average TikTok user – mainly Gen Zs, but ageing upwards – spends almost an hour a day on the app.

Most social media users would benefit from learning how to fact-check information sources.

This becomes all the more important when we consider the impact of social media on AI and ML. In drawing data from the social media well, AI is exposed to enormous reservoirs of, at best, questionable information.

The transmission of ideas on social media is hugely impacted by the levels of emotion they engender. Social media is shaped by what I call the “hot response culture”. In expressing their views online, many people favour emotive messages over the more measured and reasoned variety.

This is partly because emotion inspires emotion. Many studies have shown what common sense perhaps already suggested: when a social media message moves people to feel something, they are more likely to reply to it or share it.

A few years ago, Facebook found itself in hot water when it was shown to have conducted a psychological study involving its users, without their knowledge. Facebook’s ethics were appalling, but the study’s findings were illuminating.

They showed that social media users tend to respond most to messages that inspire strong emotions like envy or anger.

Emotion is contagious, but as any reputable psychologist will tell you, it needs to be informed by reflection, logic and reason. Anything else produces mental illness, including, at the extremes, psychosis.

It takes cool-headed detachment to distinguish between sound and flawed reasoning. Data literacy encourages people to adopt a calm and measured approach to data analysis.

Let’s get ethical!
Data literacy also involves training in the ethical use of technology. Modern technology tends to develop faster than the codes of ethics we need to guide its use.

A former British Prime Minister recently called upon world leaders gathered in Davos to move us one step closer to global governance. Tony Blair advocated the launch of a global digital database of the vaccinated – and, by extension, the unvaccinated.

This, he said, was necessary to tackle potential future pandemics.

He argued that global spreadsheets would be necessary for other areas, too, so we might as well get on with building them now. In effect, he called for a hugely expanded application of existing data technologies, with some troubling possible outcomes.

Global databases that record private choices carry huge ethical challenges, regarding data privacy, for example, and the protection of citizens’ data from hackers and fraudsters. There is also the threat of technology creep, where the public approves limited use of a tool only to find that it is later used in more invasive ways.

“If we don’t question the ethics of technology, we will build a world in which ultra-pragmatism is the dominant technological philosophy.”

Global digital databases raise concerns regarding human rights. In parts of China, local governments are experimenting with a social credit system that measures whether individual citizens act in government-approved ways. Those who do not are denied privileges afforded to more compliant individuals.

Mr Blair’s idea also raises questions about global governance. Global databases require global administration, which must then answer to global lawmaking bodies. In the end, national governments are required to cede powers to global entities. The link between the citizenry and policy-makers becomes more tenuous. In the process, democracy arguably suffers.

The lack of public debate in the aftermath of Blair’s suggestion shows just how poorly we understand the power of data and its potential for misuse.

Just two weeks after his Davos announcement, Mr Blair joined with former Foreign Secretary William Hague to urge the launch of digital identities for all British citizens.

The proposal raises all the same flags as global vaccine databases. These former politicos know they can push for unprecedentedly invasive uses of technology because the public is data illiterate.

If we don’t question the ethics of technology, we will build a world in which ultra-pragmatism is the dominant technological philosophy.

We will simply accept that if a thing can be done, it should be done.

Check back tomorrow (23rd March) to see part two of this article.


Mal Fletcher is a social futurist, social commentator and speaker and the chairman of 2030Plus, a London-based thinktank. He has researched global social trends for more than 25 years and speaks to civic leaders worldwide about issues relating to socio-cultural ethics & values, PESTLE Analysis, civic leadership, emerging and future technologies, social media, generational change and innovation. First published at 2030Plus.com. Copyright Mal Fletcher, 2023. 

Mal Fletcher is a member of the Sight Advisory Board.
