
Are AI Detectors Accurate? What You Need to Know

Written by: Kim Jamerson
Reviewed by: VidCruiter Editorial Team
Last Modified: Nov 5, 2024


OpenAI’s launch of ChatGPT in late 2022 revolutionized how the world creates content. Hundreds of artificial intelligence (AI) generation tools have flooded the market, allowing people to streamline or fully automate the creation of text, images, videos, audio, programming code, etc. Users simply need to enter a few guidelines (prompts) about what they want, and seconds later, they have an AI-generated result.

The rapid rise in these tools' popularity and sophistication has led to an equally rapid need to detect whether content has been generated using AI. For example:

  • Education: The learning process is undermined when assignments are completed using AI rather than by the student. Teachers need to be able to tell the difference.
  • Recruiting and Hiring: Many roles require pre-employment assessments to ensure candidates have critical skills (e.g., coding, writing). Hiring managers need to know submissions are authentic.
  • Business: Many companies embrace the use of AI to improve efficiency and scalability. But in situations where they need or prefer wholly human-created content, they want to know they’re getting what they paid for.
  • Public Discourse: When photos, video, and audio can be quickly built from scratch or manipulated using generative AI, misleading and fraudulent content ensues. Reporters, voters, and citizens need to be able to distinguish fact from fiction.

The tech market has responded with a flurry of AI-detection solutions. However, a key question remains: Are AI detectors reliable enough to be widely adopted across different industries and content types? This article explores the accuracy, limitations, and practical applications of AI checkers, offering key insights to ensure fair and effective use.


What Are AI Detectors?

AI detectors are tools designed to identify whether content (e.g., text, images, video, audio, and code) was generated using artificial intelligence. For writing, these solutions analyze syntax, word usage, and language patterns to determine if AI models like GPT-based systems created the text. For other media, AI checkers assess visual or auditory clues, such as pixel patterns, speech intonation, or frame inconsistencies, to detect AI involvement. Similarly, they can analyze code for recognizable AI-generated patterns.
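To make "syntax, word usage, and language patterns" a little more concrete, here is a toy sketch of two surface signals a text detector might look at: word repetition and variation in sentence length. It is purely illustrative; every function and number here is invented, and commercial detectors rely on far more sophisticated, trained models.

```python
# Toy illustration only: two surface signals sometimes associated with
# AI-generated text (heavy word repetition, unusually even sentence lengths).
# Real detectors use trained machine-learning models, not hand-picked rules.
import re
import statistics
from collections import Counter

def surface_signals(text: str) -> dict:
    sentences = [s for s in re.split(r"[.!?]+\s*", text) if s.strip()]
    words = re.findall(r"[a-z']+", text.lower())
    lengths = [len(s.split()) for s in sentences]
    repetition = max(Counter(words).values(), default=0) / max(len(words), 1)
    length_spread = statistics.pstdev(lengths) if len(lengths) > 1 else 0.0
    return {"word_repetition": repetition, "sentence_length_stdev": length_spread}

print(surface_signals("Savannah cats are tall and lean. They bond closely with people."))
```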

AI detectors have become increasingly popular where originality, authenticity, and authorship are essential. They come in various forms, from standalone platforms to features integrated within proctoring software, plagiarism checkers, content management systems, and security protocols.


Common AI Detectors and Their Use Cases

Numerous AI detectors are available on the market. Some are designed for specific contexts, such as academic settings, while others are more general-purpose. Here are a few examples:

  • Turnitin: Widely used in educational institutions for plagiarism checking, Turnitin recently added AI-detection capabilities.
  • GPTZero: One of the first text detectors, GPTZero is tailored to detecting AI-generated text from OpenAI's models, but its effectiveness varies with newer AI versions.
  • Originality.ai: Another popular AI detector for writing, this platform also offers scans for readability, plagiarism, and fact-checking.  
  • Reality Defender: Provides AI detection across images, videos, and audio, helping organizations spot deepfakes or AI-generated content in various formats.
  • Sensity AI: Specializes in detecting deepfakes and AI-generated images. It's commonly used in law enforcement and media verification.

Are AI Detectors Accurate? 

In short, no. AI checkers are not always accurate. They can produce false positives (i.e., flag human-written text as AI-generated) and false negatives (i.e., fail to detect AI content).

Understanding how AI content checkers work can explain their inconsistencies. Let’s focus on text detectors to keep it as simple as possible.

How AI Detectors Work

Most AI detectors rely on complex algorithms that analyze text patterns and compare them against known characteristics of AI-generated content. These patterns include sentence structure, word repetition, and the complexity of ideas presented.

Meanwhile, AI-generated content is designed to simulate human writing closely.  Through machine learning and other innovations, the content these tools create keeps improving—making it increasingly challenging to detect consistently.
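One signal often described in this space is "perplexity," roughly how predictable the text looks to a language model; highly predictable text is sometimes read as a hint of AI authorship. Below is a minimal sketch of that idea using the open-source GPT-2 model via the Hugging Face transformers library. The 30.0 threshold is made up, and no commercial detector works this simply.

```python
# A minimal sketch of perplexity-based scoring, one heuristic commonly
# associated with AI-text detection. Illustrative only.
import torch
from transformers import GPT2LMHeadModel, GPT2TokenizerFast

tokenizer = GPT2TokenizerFast.from_pretrained("gpt2")
model = GPT2LMHeadModel.from_pretrained("gpt2")
model.eval()

def perplexity(text: str) -> float:
    """Average per-token perplexity of the text under GPT-2."""
    enc = tokenizer(text, return_tensors="pt", truncation=True, max_length=1024)
    with torch.no_grad():
        loss = model(enc.input_ids, labels=enc.input_ids).loss  # mean cross-entropy
    return float(torch.exp(loss))

def looks_ai_generated(text: str, threshold: float = 30.0) -> bool:
    # Hypothetical cutoff: real detectors combine many signals and calibrate
    # thresholds on labeled data; a single check like this will misfire often.
    return perplexity(text) < threshold
```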

Think of it this way: if AI content generators are in their infancy, then AI content detectors are in the zygote stage. Both technologies are evolving quickly, but detectors will always be playing catch-up with their generator brethren.


Common AI Detector Failures

Experts agree AI detector accuracy depends on many factors, such as the model used, the version of AI it’s built to detect, the volume and quality of data used to train the AI, and the type and complexity of the content being analyzed.

Three current common reasons for false positives by AI text detectors include:

  • Non-Native English Writers: A Stanford study found that writing produced by non-native authors was consistently miscategorized as AI-generated, with an average false-positive rate of 61.3% (Science Direct). 
  • Use of Grammar/Spelling Checkers: AI detectors often erroneously flag content as "AI-generated" when the writer uses commonly accepted tools (e.g., Grammarly) that fix typos and spelling errors or suggest rewriting a sentence for clarity.
  • Search Engine Optimization: Ironically, writing with search engine optimization (SEO) in mind often gets flagged as AI-generated even when it's 100% human-produced. SEO content is designed to make it easy for searchers (via search engine algorithms) to find the information they're looking for, and that same content often feeds the suggestions AI tools make.

Why You Should Use AI Detectors Cautiously

Given the current limitations of AI detectors, it’s easy to see the potential risks of using them for decision-making. That’s particularly true in high-stakes scenarios, such as passing or failing a student or hiring or disqualifying a job seeker.

Organizations considering using AI detectors should ask themselves important questions:

  • Why is AI detection being used? 
  • How important is original, AI-free creation in this situation?
  • What are the consequences of an AI-detection assessment being wrong?
  • Is this type of content at a higher risk of being assessed inaccurately (e.g., multilingual content)? 
  • Are there additional or alternate ways to evaluate the content?

Some AI checkers present results as a confidence score (typically a percentage) indicating the likelihood that the content was AI-generated. Originality.ai uses this approach: if it scores a document as "90% AI, 10% Original," it is 90% confident that the text was written by AI. It does not mean that 90% of the content was AI-generated and only 10% was created by humans.

Other AI checkers score content based on the percentage of text in a document they think might have been AI-generated. Grammarly takes this approach: “50% of your document appears to be AI-generated (contains patterns often found in AI text).”

Anyone using an AI detector must understand how to interpret scores, that confidence thresholds vary across tools, and that ratings aren’t always reliable.
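Here is a small, hypothetical illustration of how those two reporting styles can diverge for the same document. The per-sentence scores are invented, and the simple average is a stand-in for whatever proprietary aggregation a real tool performs.

```python
# A minimal sketch, assuming hypothetical per-sentence AI probabilities, of why
# two tools can report very different-looking numbers for the same document.
sentence_scores = [0.95, 0.90, 0.20, 0.15]  # made-up model outputs per sentence

# Style 1 (a confidence score, like Originality.ai's): one overall probability
# that the document as a whole was AI-written. The average below is only a
# stand-in for a real tool's aggregation method.
document_confidence = sum(sentence_scores) / len(sentence_scores)

# Style 2 (a share of text, like Grammarly's): the fraction of sentences that
# cross a flagging threshold, i.e., "X% of your document appears AI-generated."
flagged = [s for s in sentence_scores if s >= 0.5]
share_flagged = len(flagged) / len(sentence_scores)

print(f"Confidence the document is AI: {document_confidence:.0%}")  # 55%
print(f"Share of text flagged as AI:   {share_flagged:.0%}")        # 50%
```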

AI Detection in Action

Real-world examples are often the best way to understand a technology’s capabilities and limitations. Here’s a simple test we ran using two AI-detection tools to evaluate text we created.

Approach

Step 1: Created four 130-ish word summaries about Savannah cats:

  1. 100% human written
  2. 100% AI-generated (with a prompt designed to mimic the human content)
  3. 50/50 with the human intro (first half from summary 1, second half from summary 2)
  4. 50/50 with the AI intro (first half from summary 2, second half from summary 1)

Step 2: Ran all four examples through Originality.ai and Grammarly to detect AI usage.

Step 3: Compared the results, which varied wildly between the two platforms and even within the same solution.
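For anyone who wants to script a similar comparison, a minimal sketch might look like the following. The detect_ai function is a placeholder (neither Originality.ai's nor Grammarly's actual interfaces are shown), and the summary text would be pasted in from your own samples.

```python
# Sketch of the comparison workflow: score each summary with whichever
# detector you are evaluating and print the results side by side.
summaries = {
    "1. Human written": "...",         # paste the human-written summary here
    "2. AI-generated": "...",          # paste the AI-generated summary here
    "3. 50/50 w/ human intro": "...",
    "4. 50/50 w/ AI intro": "...",
}

def detect_ai(text: str) -> float:
    """Placeholder: swap in a call to whichever detection tool you are testing."""
    return 0.0  # dummy value so the sketch runs end to end

for label, text in summaries.items():
    score = detect_ai(text)
    print(f"{label}: {score:.0%} likely AI")
```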

Results

Summary Authorship       | Originality.ai                   | Grammarly
1. Human written         | Likely Original (99% confidence) | 0% AI-generated text detected
2. AI-generated          | Likely AI (100% confidence)      | 33% AI-generated text detected
3. 50/50 w/ human intro  | Likely AI (100% confidence)      | 50% AI-generated text detected
4. 50/50 w/ AI intro     | Likely AI (97% confidence)       | 0% AI-generated text detected

Beyond its overall score, Originality.ai offers line-by-line confidence insights, using a color scale that ranges from dark green (likely human) to dark red (likely AI), with lighter shades for everything in between.

Summary 1 (100% human written)


Summary 2 (AI-generated)


Summary 3 (50/50 w/ human intro)


Summary 4 (50/50 w/ AI intro)


Conclusions

This basic test wasn't designed to compare AI-detector solutions or to show the strengths and weaknesses of any individual platform. Instead, it was meant to highlight some current limitations of AI-detector technology and why organizations should treat its output as one data point among several, not as the final arbiter in decisions.

Proctoring as an AI-Detector Alternative

Many organizations are using proctoring tools as an alternative (or complement) to AI detection for safeguarding academic integrity, employee competency testing, and similar evaluations.

Proctoring involves real-time monitoring of candidates or students during tests, interviews, or assessments to ensure they don't use unauthorized assistance. For example, proctoring software like ExamSoft or ProctorU helps verify test-takers' authenticity by tracking behavior, monitoring screens, and flagging suspicious activity.

While proctoring does not explicitly target AI-generated content, it can help maintain fairness by ensuring individuals complete specific tasks (e.g., writing assessments) in a controlled environment.

Wrap Up

While AI detectors provide some value, their potential for false positives and negatives means organizations should be cautious when relying on them for critical recruitment, education, and business decisions. The same is true for alternatives like proctoring.

Both approaches require human oversight and judgment that simply can’t be replaced by AI…at least for now.


Frequently Asked Questions

What is AI-detector accuracy?

Accuracy in AI detection refers to how well an AI checker can correctly distinguish between human and AI-generated content. Higher accuracy means the tool makes fewer mistakes and delivers more reliable results.
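In classification terms, accuracy is the share of all judgments the detector gets right, across both human and AI samples. A quick sketch with made-up counts from a hypothetical evaluation of 100 labeled samples:

```python
# A minimal worked example of the standard accuracy definition, using
# invented counts from a hypothetical evaluation of 100 labeled samples.
true_positives = 40   # AI text correctly flagged as AI
true_negatives = 45   # human text correctly passed as human
false_positives = 5   # human text wrongly flagged as AI
false_negatives = 10  # AI text the detector missed

total = true_positives + true_negatives + false_positives + false_negatives
accuracy = (true_positives + true_negatives) / total                        # 0.85
false_positive_rate = false_positives / (false_positives + true_negatives)  # 0.10
print(accuracy, false_positive_rate)
```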

Why do employers use AI detectors?

Employers use these tools to ensure the authenticity of written content, such as resumes, work samples, and pre-employment assessments. Inaccurate results could lead to misjudging a candidate's abilities, unfair hiring decisions, or legal risks.

What factors affect accuracy in AI detectors?

The accuracy of AI text checkers can be influenced by several factors, including the quality of the training data, the complexity of the content being analyzed, and whether the AI detector can adapt to new language patterns. Tools also perform better when regularly updated with new datasets and algorithms.

How can AI-detection tools be tested for accuracy?

AI detectors can be tested by running various human-generated and AI-generated content through them. Comparing the tools’ results with known data can help assess accuracy.
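As a rough sketch of that process, you could keep a set of samples whose origin you already know, run each one through the detector, and tally how often its verdict matches. The detect_ai function below is a placeholder for whichever tool you're evaluating.

```python
# Sketch of a basic accuracy check against content with known origins.
# detect_ai is hypothetical; wire it to whichever tool you are evaluating.
labeled_samples = [
    ("A paragraph you know a person wrote ...", "human"),
    ("A paragraph you generated with an AI tool ...", "ai"),
    # add enough varied samples to make the tally meaningful
]

def detect_ai(text: str) -> str:
    """Placeholder verdict ('ai' or 'human') from the detector under test."""
    return "human"  # dummy value so the sketch runs end to end

tally = {"correct": 0, "false_positive": 0, "false_negative": 0}
for text, truth in labeled_samples:
    verdict = detect_ai(text)
    if verdict == truth:
        tally["correct"] += 1
    elif verdict == "ai":   # human writing flagged as AI
        tally["false_positive"] += 1
    else:                   # AI writing that slipped through
        tally["false_negative"] += 1

print(tally)  # e.g., {'correct': 1, 'false_positive': 0, 'false_negative': 1}
```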

Can AI-detection tools replace human judgment?

No, AI detection should not replace human judgment. While AI checkers provide valuable insights, human review remains essential for assessing context, nuances, and overall fairness. AI detectors should be used as support tools rather than the ultimate decision-maker.