What is AI, and how do programs like ChatGPT and DeepSeek work?

Artificial intelligence (AI) has increasingly become part of everyday life over the past decade.
It is used for everything from personalising social media feeds to powering medical breakthroughs.
But as big tech firms and governments vie to be at the forefront of AI's development, critics have expressed caution over its potential misuse, ethical complexities and environmental impact.
What is AI and what is it used for?
AI allows computers to learn and solve problems in ways that can seem human.
Computers cannot think, empathise or reason.
However, scientists have developed systems that can perform tasks which usually require human intelligence, trying to replicate how people acquire and use knowledge.
AI programs can process large amounts of data, identify patterns and follow detailed instructions about what to do with that information.
This could mean anticipating which product an online shopper might buy, based on their previous purchases, in order to recommend items.
The technology is also behind voice-controlled virtual assistants like Apple's Siri and Amazon's Alexa, and is being used to develop systems for self-driving cars.
AI also helps social platforms like Facebook, TikTok and X decide what posts to show users. Streaming services Spotify and Deezer use AI to suggest music.
Scientists are also using AI as a way to help spot cancers, speed up diagnoses and identify new medicines.
Computer vision, a form of AI that enables computers to detect objects or people in images, is being used by radiographers to help them review X-ray results.
What are generative AI programs like ChatGPT, DeepSeek and Midjourney?
Generative AI is used to create new content which can feel like it has been made by a human.
It does this by learning from vast quantities of existing data such as online text and images.
ChatGPT and Chinese rival DeepSeek's chatbot are two widely used generative AI tools. Midjourney can create images from simple text prompts.
So-called chatbots such as Google's Gemini or Meta AI can hold text conversations with users.

Generative AI can also be used to make high-quality videos and music.
Songs mimicking the style or sound of famous musicians have gone viral, sometimes leaving fans confused about their authenticity.
Why is AI controversial?
While acknowledging AI's potential, some experts are worried about the implications of its rapid growth.
The International Monetary Fund (IMF) has warned AI could affect nearly 40% of jobs, and worsen financial inequality.
Prof Geoffrey Hinton, a computer scientist regarded as one of the "godfathers" of AI development, has expressed concern that powerful AI systems could even make humans extinct - a fear dismissed by his fellow "AI godfather", Yann LeCun.
Critics also highlight the tech's potential to reproduce biased information, or discriminate against some social groups.
This is because much of the data used to train AI comes from public material, including social media posts or comments, which can reflect biases such as sexism or racism.
And while AI programs are growing more adept, they are still prone to errors. Generative AI systems are known to "hallucinate", asserting falsehoods as fact.
Apple halted a new AI feature in January after it incorrectly summarised news app notifications.
The BBC complained about the feature after Apple's AI falsely told readers that Luigi Mangione - the man accused of killing UnitedHealthcare CEO Brian Thompson - had shot himself.
Google has also faced criticism over inaccurate answers produced by its AI search overviews.
This has added to concerns about the use of AI in schools and workplaces, where it is increasingly used to help summarise texts, write emails or essays and solve bugs in code.
There are worries about students using AI technology to "cheat" on assignments, or employees "smuggling" it into work.
Writers, musicians and artists have also pushed back against the technology, accusing AI developers of using their work to train systems without consent or compensation.

Thousands of creators - including Abba singer-songwriter Björn Ulvaeus, writers Ian Rankin and Joanne Harris and actress Julianne Moore - signed a statement in October 2024 calling AI a "major, unjust threat" to their livelihoods.
How does AI impact the environment?
It is not clear how much energy AI systems use, but some researchers estimate the industry as a whole could soon consume as much energy as the Netherlands.
Creating the powerful computer chips needed to run AI programs also takes large amounts of power and water.
Demand for generative AI services has meant an increase in the number of data centres.
These huge halls - housing thousands of racks of computer servers - use substantial amounts of energy and require large volumes of water to keep them cool.
Some large tech companies have invested in ways to reduce or reuse the water needed, or have opted for alternatives such as air-cooling.
However, some experts and activists fear that AI will worsen water supply problems.
The BBC was told in February that government plans to make the UK a "world leader" in AI could put already stretched supplies of drinking water under strain.
In September 2024, Google said it would reconsider proposals for a data centre in Chile, which has struggled with drought.
What rules are in place for AI?
Some governments have already introduced rules governing how AI operates.
The EU's Artificial Intelligence Act places controls on high-risk systems used in areas such as education, healthcare, law enforcement or elections. It bans some uses of AI altogether.
Generative AI developers in China are required to safeguard citizens' data, and promote transparency and accuracy of information. But they are also bound by the country's strict censorship laws.
In the UK, Prime Minister Sir Keir Starmer has said the government "will test and understand AI before we regulate it".
Both the UK and US have AI Safety Institutes that aim to identify risks and evaluate advanced AI models.
In 2024 the two countries signed an agreement to collaborate on developing "robust" AI testing methods.
However, in February 2025, neither country signed an international AI declaration which pledged an open, inclusive and sustainable approach to the technology.
Several countries including the UK are also clamping down on use of AI systems to create deepfake nude imagery and child sexual abuse material.