UK Partners with Microsoft to Develop System for Detecting Deepfakes

LONDON: The British government has announced a new partnership with Microsoft to help detect deepfake content online. The initiative aims to set clear standards for identifying harmful and deceptive AI-generated material.

The government said it will work with Microsoft, academics, and technology experts. Together, they will design a system to test and improve tools used to spot deepfakes across digital platforms.

Deepfakes are digital media created using artificial intelligence. They can make fake images, videos, or audio appear real. These tools have existed for years, but recent AI advances have made them more realistic.

The rapid growth of AI chatbots like ChatGPT has increased concerns. Experts warn that deepfakes are now easier to create and harder to detect.

The UK has recently made it illegal to create non-consensual intimate images. This law was introduced after rising cases of online abuse and identity misuse.

Technology Minister Liz Kendall said deepfakes are being used for serious crimes. These include fraud, impersonation, and the exploitation of women and children.

She added that deepfakes also undermine public trust, leaving people struggling to tell whether content is real or fake.

Under the new plan, the government will build a deepfake detection evaluation framework. This framework will test different detection tools in real-world conditions.

The system will examine how well technology can detect deepfakes used in scams and abuse. It will also study impersonation and misleading political content.

Microsoft will play a key role in providing technical support. The company will help develop and test detection methods using advanced AI tools.

The government said the framework will guide law enforcement agencies. It will also help online platforms improve safety standards.

Industry players will be expected to follow these standards. This will ensure consistent action against harmful AI content.

Government data shows a sharp rise in deepfake content: around 8 million deepfakes were shared in 2025, up from roughly 500,000 in 2023.

Regulators worldwide are struggling to keep up with AI risks. Many countries are now reviewing laws and safety policies.

The UK action was partly triggered by recent incidents. Elon Musk's Grok chatbot reportedly created non-consensual sexual images, including content involving children, raising serious concerns.

The British communications watchdog and privacy regulator are now investigating Grok. Their findings may shape future AI rules.

By working with Microsoft, the UK hopes to stay ahead of these threats. The goal is to protect users and restore trust in digital content.

By Sehar Sadiq
