
Navigating Generative AI: The Good, The Bad, and The Ugly

Daniel Strain 23-Nov-2023 17:08:54

As we approach the year's final stretch, we're glancing in the rear-view mirror at some of the most notable tech happenings of 2023. In this article, we take a deep dive into Generative AI and the recent Bletchley AI Safety Summit.

We also cover some of the potential risks and unintended drawbacks that recent AI developments pose, along with the response we have seen from some of the biggest brands on the planet.

So, strap yourselves in as we, the Netitude technology experts, guide you through the tech trends and happenings worth knowing about in 2023.

Generative AI

It’s safe to say that Generative AI entered the tech landscape in 2023 with a bang. Productivity is the pillar that makes Generative AI such an exciting proposition for businesses across every industry globally.

If you haven’t used or at least heard of ChatGPT, you’ve probably been living under a rock. ChatGPT (Chat Generative Pre-trained Transformer) logged 100 million users in January 2023 – a feat accomplished just two months after its launch, making it the fastest-growing consumer software application ever.

These revolutionary forms of artificial intelligence blur the lines between original human creations and pieces of text, music, and digital imagery that machines have generated.

The rate of growth is staggering. Merely four months after ChatGPT’s launch, OpenAI (ChatGPT’s creator) released an enhanced version, GPT-4. Unlike its predecessor, access to GPT-4 costs $20 per month (£16.05).

Opting for the pricier offering allows you to communicate via images or voice and even create images. OpenAI has also claimed that GPT-4 is “40% more likely” to generate factual responses than its predecessor.

Generative AI has proved to be a game-changer for smaller businesses. Tools like ChatGPT help these companies close the gap on larger conglomerates, as they can increase their capabilities across the board if leveraged correctly.

Small businesses can use Generative AI to improve operational reliability, boost internal and external communications, and enhance customer experience.

Therefore, it can come as no surprise that the technology has been adopted at such a rate.

What’s more, ChatGPT 3.5 and 4 are just the tip of the iceberg in terms of generative AI. A complete and comprehensive realisation of the technology’s benefits will take time. Consequently, the potential risks associated with the tech must be outlined and mitigated accordingly.

The 2023 Bletchley AI Safety Summit

November kicked off with nothing short of a technological and diplomatic breakthrough as the UK hosted the first global AI Safety Summit.

The summit was set up to gather world leaders and technology experts in the same room, bang their heads together, and make sure that AI technology doesn’t run away with itself before we figure out the potential repercussions.

This exclusive event included world leaders (from China, Germany and the USA, among others) alongside leading tech organisations (Google, Microsoft, Meta) and also boasted high-profile attendees such as Elon Musk (tech genius and notable billionaire).

UK Prime Minister Rishi Sunak hailed the event as a significant diplomatic achievement, as it resulted in the creation of an international agreement, the Bletchley Declaration, committing the EU and the 28 countries in attendance to acknowledge and tackle AI-associated risks.

After the announcement of the declaration, it was also agreed that the United Nations would champion an expert panel on AI, similar to that of the Intergovernmental Panel on Climate Change. This is a pivotal step in ensuring that major tech companies collaborate and communicate with governments globally to test future AI models before and after release.

It certainly was a groundbreaking event, and one that will have given people in the tech industry much-needed peace of mind and reassurance amid all the uncertainty that has surrounded AI technology since its inception.

However, it will take more than one summit to clear up all the grey areas surrounding AI technology. It will be a gradual and time-consuming process requiring global collaboration and cooperation. That said, the Bletchley AI Safety Summit is undoubtedly a step in the right direction.

Disinformation and Deepfakes

Some of the most notable tech-related changes are set to revolve around machines getting better at impersonating humans. The breakthroughs in Generative AI have resulted in the rise of deepfakes (synthetic media digitally manipulated to display someone’s likeness convincingly).

With elections pencilled in for the US (November 2024) and the UK (by January 2025), there is a general feeling of unrest and uncertainty amongst people working in political fields, as deepfakes can fuel disinformation with harmful consequences for their respective campaigns.

Deepfakes and disinformation can be used to fuel Information Warfare (a concept that involves exploiting information management and communication technology to pursue a competitive advantage). That means rival candidates and party members can harness AI technologies to spin false and harmful narratives about their opponents.

AI Hallucination

Humans turn to Generative AI when they want a quick answer to an obstacle or issue. This can be potentially dangerous because, in some cases, AI programs can “hallucinate” their responses: they generate output that sounds confident and plausible but is in fact false or misleading.

The number of people turning to AI to resolve their issues or fact-check their queries is growing rapidly. Therefore, it’s crucial that Generative AI users understand the limitations of this technology. The word of AI should not be taken as gospel.

A Collective Response: Content Authenticity

Forbes recently stated that countermeasures are set to be deployed against the deluge of disinformation and deepfakes. It has hailed 2024 as “The Year of Authenticity”, believing that content marketers will need to arm themselves with deep, authentic human connections and empathy to stand apart from AI-generated content.

Believe it or not, customers and clients crave real, relatable experiences and interactions with other humans, and they want to engage with content that feels genuine. A more authentic approach will go a long way towards evoking emotion, building trust and fostering a sense of community between your business and its customers.

What’s more, we have recently seen initiatives such as the Coalition for Content Provenance and Authenticity (C2PA) and Adobe’s Content Authenticity Initiative (CAI) created to uphold content creation standards across multiple industries. The former is backed by media heavyweights such as Adobe, BBC and Microsoft, whereas the latter is endorsed by tech juggernauts Intel and Sony.

Initiatives such as these are leading the fightback against the deception and disinformation that come hand-in-hand with AI technology. They aim to provide transparency as to how a piece of digital content was created or changed, which is critical to determining whether or not we can trust the source.
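To make that idea more concrete, here is a minimal, hypothetical sketch of the basic concept behind content provenance: fingerprint a file and record how it was made, so anyone can later check that it hasn’t been altered. This is a simplified illustration only, not the real C2PA SDK or specification (the actual standard cryptographically signs these records and embeds them in the content itself), and the file and function names below are placeholders of our own.

```python
# Hypothetical sketch of content provenance (NOT the real C2PA SDK or spec):
# fingerprint a file, record how it was produced, and verify it later.
import hashlib
import json
from datetime import datetime, timezone


def make_provenance_record(file_path: str, actions: list[str]) -> dict:
    """Build a simple provenance record for a piece of content."""
    with open(file_path, "rb") as f:
        content_hash = hashlib.sha256(f.read()).hexdigest()
    return {
        "asset": file_path,
        "sha256": content_hash,  # fingerprint of the file as it is now
        "actions": actions,      # e.g. ["captured", "cropped", "colour-corrected"]
        "recorded_at": datetime.now(timezone.utc).isoformat(),
    }


def verify_provenance(file_path: str, record: dict) -> bool:
    """Check that the file still matches the fingerprint in its record."""
    with open(file_path, "rb") as f:
        return hashlib.sha256(f.read()).hexdigest() == record["sha256"]


if __name__ == "__main__":
    # "photo.jpg" is a placeholder file name for illustration.
    record = make_provenance_record("photo.jpg", ["captured", "colour-corrected"])
    print(json.dumps(record, indent=2))
    print("Unchanged since recording:", verify_provenance("photo.jpg", record))
```

The point of the sketch is simply that a tamper-evident record makes provenance checkable; standards like C2PA go further by signing these records and attaching them to the content so the history travels with the file.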

Ignorance is (not so) Bliss

It’s hard to ignore the huge impact and technological shift that Generative AI has brought about with the rapid growth of applications like ChatGPT in 2023. In 2024, the technology looks set to continue developing on the same trajectory, bringing even more potential benefits and pitfalls for businesses and world leaders to contend with.

One thing is certain: keeping up with these revolutionary changes is crucial. Failing to keep pace with AI puts you at risk of being left behind technologically and leaves you exposed to deepfakes and disinformation.