The AI revolution is here—but the playing field isn’t level for everyone. The CareSide conducted a research study that reveals alarming generational gaps that, if left unaddressed, could alter how older adults participate in the digital world.
When Suzanne F., 74, logs onto social media, she’s mostly there to browse—not to post. While her friends share photos and intimate details about their lives, she opts not to participate for fear of having her identity stolen by AI.
‘I am most fearful of the fact that it can be used for taking over someone’s identity and personal information,’ she says about AI. ‘A lot of people my age share so much information, I’m not sure they’re aware of the dangers.’
Suzanne admits she has limited experience with AI, but occasionally uses it to assist with writing letters. Although minimal, that usage might put Suzanne ahead of the adoption curve in her age bracket as AI interest continues to surge across generations.
To say AI is having a moment would be an understatement.
The technology has been in development for decades, but the concept of artificial intelligence has captured imaginations for far longer than that in the confined spaces of science-fiction movies and books.
Now, though, AI is inescapable, and the line between what’s real and what isn’t becomes more blurred by the day—especially for older people like Suzanne.
It’s an unfortunate truth that seniors are frequent targets of fraud, and many of them live with a perpetual fear of getting conned. Dr. Marti DeLiema holds a PhD in Gerontology and serves as the Associate Director of Education for the Center on Healthy Aging and Innovation at the University of Minnesota. As a researcher of elder fraud and financial exploitation, she believes AI will make matters that much more difficult for older people.
‘Using GenAI, [scammers] can create convincing images, videos, and even voice models of people the target knows and trusts to make the premise of the scam more believable,’ she points out.
In the past, scammers relied mostly on door-to-door and over-the-phone schemes. But the rise of the Internet opened a Pandora’s box of new tactics conducted through email, social media, and online marketplaces. Suzanne and many others in her generation view AI as the next step in that evolution: a powerful tool that can be used for harm just as easily as for good, even if its potential benefits outweigh its drawbacks.
Like most technologies, AI is only as noble as the people guiding it. It’s not solely a conveyor belt of low-quality slop: Companies around the globe use AI to streamline processes, and entire industries—healthcare, finance, agriculture, and many others—are leveraging AI to solve complex problems and make everyday life easier for the rest of us. Ironically, AI has become a valuable resource in combating the very scams that have long preyed on older adults.
The fact remains, though, that none of us can be sure where this AI tidal wave leads. We’re all riding that wave together, asking the same questions:
- How can we distinguish AI hype from practical innovation?
- What impact will AI have on me, my career, and the people I care about?
- And of course: How can I tell what’s real and what’s not?
Can Seniors Detect GenAI Content?
The CareSide, an aged care and home care provider based in Australia, conducted a research study to gauge people’s ability to detect AI-generated content. Because we work closely with the senior population, we wanted to better understand how older adults perceive generative AI, how they interact with it, and most importantly, how accurately they can identify it online compared to younger generations.
To explore this, we created an AI Detection Quiz that challenged participants to identify AI-generated multimedia they might encounter on social media, in email, or over the phone. The quiz simulated real-world scenarios by presenting content and asking participants whether it was created by a human or by AI.
Some questions were intentionally more challenging than others, but we aimed to produce a broad range of scores, not stump participants with maximum difficulty. The average score across all participants (71.1%) suggests we struck that balance successfully.
The quiz featured 15 total questions, including videos, photos, voice messages, and written messages. Each question had two possible responses: Participants either had to identify which piece of multimedia was AI-generated, or they had to answer ‘Yes, this is AI-generated’ or ‘No, this is not AI-generated.’
A total of 3,198 people completed the quiz. Because the quiz is in English, it was shared with individuals living in English-speaking countries, including Australia, the United States, Canada, the United Kingdom, New Zealand, South Africa, and Ireland. Participants were only able to take the quiz once.
Respondents provided their exact age and were grouped into four separate age cohorts for analysis:
- 18–29 year-olds: 12.2% of participants
- 30–44 year-olds: 13.2% of participants
- 45–64 year-olds: 31.8% of participants
- 65 years and older: 42.8% of participants
Seniors Struggle the Most to Identify GenAI Content
Scores declined significantly with age. The average scores for the 18–29 and 30–44 age groups were 79.8% and 77.7%, respectively. By comparison, the average score for seniors aged 65+ was 65.5%.
The results reveal that older adults can detect AI-generated content only moderately better than a coin flip. Because each question had only two possible answers, a score of 50% indicates that a person cannot detect AI better than simply guessing.
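To make that concrete: on a 15-question, two-option quiz, even pure guessing produces a wide spread of individual scores. The sketch below (our illustration only, not part of the study’s analysis, and assuming Python with SciPy) shows that a lone participant scoring 10 of 15 (66.7%, close to the seniors’ average) is statistically indistinguishable from a guesser, while a cohort of 1,368 averaging 65.5% is far outside what chance can explain.

```python
from scipy import stats

# Each quiz question had two options, so pure guessing yields an
# expected score of 50%. How unusual would 10 correct answers out
# of 15 (66.7%) be for a lone guesser?
result = stats.binomtest(k=10, n=15, p=0.5, alternative="greater")
print(f"P(10+ correct by chance) = {result.pvalue:.3f}")  # ~0.151

# A single score like that proves little on its own. The standard
# error of a group mean shrinks with sample size, though, so a
# cohort of 1,368 participants averaging 65.5% sits far beyond
# what guessing can produce.
```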
While seniors didn’t struggle quite to that extent, the data did produce statistically significant differences in accuracy across age groups.
We ran a one-way ANOVA to compare overall scores across age groups. The dependent variable was the percentage of correct answers, and the independent variable was age range. The test produced a statistically significant result, F(3, 3194) = 176.6, p < 0.001, indicating that at least one age group performed differently from the others.
We then conducted a Tukey HSD (Honestly Significant Difference) post-hoc test to drill down into pairwise comparisons between age groups. The mean differences in score between each pair of age groups, along with the corresponding p-values, are included in the table above. There was no statistically significant difference in performance between the 18–29 and 30–44 age cohorts. However, all other comparisons showed highly significant differences, with older age groups scoring significantly lower than younger groups in each scenario.
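For readers who want to run this kind of analysis on their own data, the sketch below shows one standard way to do it in Python with SciPy and statsmodels. It is a minimal illustration rather than our actual analysis code, and the file and column names (quiz_results.csv, score, age_group) are hypothetical.

```python
import pandas as pd
from scipy import stats
from statsmodels.stats.multicomp import pairwise_tukeyhsd

# Hypothetical layout: one row per participant, with their quiz
# score (percentage correct) and age cohort.
df = pd.read_csv("quiz_results.csv")  # columns: score, age_group

# One-way ANOVA: do mean scores differ across the four cohorts?
groups = [g["score"].to_numpy() for _, g in df.groupby("age_group")]
f_stat, p_value = stats.f_oneway(*groups)
print(f"F = {f_stat:.1f}, p = {p_value:.3g}")

# Tukey HSD post-hoc test: which specific pairs of cohorts differ?
tukey = pairwise_tukeyhsd(endog=df["score"], groups=df["age_group"], alpha=0.05)
print(tukey.summary())
```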
Later in the report, we expand on this analysis by adding two covariates to the linear model to increase the confidence in the findings and account for potential confounding effects.
Seniors Lack AI Confidence—and Online Self-Awareness
At the start of the quiz, we also asked participants a simple question: ‘How confident are you in your ability to determine whether a piece of online content is AI-generated?’
Our goal was to examine whether people’s perceived ability to distinguish between AI and human-generated content aligned with their actual performance.
Participants chose from five options on a Likert scale: Not-At-All, Slightly, Moderately, Very, and Extremely.
Unsurprisingly, confidence declined with age, perhaps reflecting the apprehension and discomfort many older adults feel toward new technologies.
Since confidence level is a categorical variable, we ran a Pearson’s chi-square test and an ordinal logistic regression to determine if the differences in reported confidence level were statistically significant. The chi-square test found the distribution of confidence levels varied significantly across age groups, χ²(12, N = 3198) = 302.12, p < 0.001.
The ordinal logistic regression confirmed this further. Older adults were significantly less likely than younger adults to report higher confidence in identifying AI-generated content. The model results displayed in the table below highlight that confidence declines steadily with age. Each predictor is compared to the 18–29 age group.
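A minimal sketch of how both tests can be set up in Python follows; as before, this illustrates the general approach rather than our exact analysis code, and the column names are hypothetical.

```python
import pandas as pd
from scipy.stats import chi2_contingency
from statsmodels.miscmodels.ordinal_model import OrderedModel

df = pd.read_csv("quiz_results.csv")  # columns: age_group, confidence

# Chi-square test of independence on the age-by-confidence table
table = pd.crosstab(df["age_group"], df["confidence"])
chi2, p, dof, _ = chi2_contingency(table)
print(f"chi2({dof}) = {chi2:.2f}, p = {p:.3g}")

# Ordinal logistic regression: ordered confidence predicted by age
# group. Dropping the first dummy leaves 18-29 as the reference.
levels = ["Not-At-All", "Slightly", "Moderately", "Very", "Extremely"]
df["confidence"] = pd.Categorical(df["confidence"], categories=levels, ordered=True)
exog = pd.get_dummies(df["age_group"], drop_first=True, dtype=float)
result = OrderedModel(df["confidence"], exog, distr="logit").fit(method="bfgs")
print(result.summary())
```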
Still, among the 1,368 quiz participants in the 65+ age range, 49.9% (683 respondents) considered themselves moderately confident in their AI-detection abilities, while 13.2% (180 respondents) said they were very or extremely confident.
Despite that self-assurance, their scores remained lower than those of younger age groups and showed little improvement compared to individuals in the same age group who reported a lower confidence level (i.e. not-at-all or slightly confident).
The youngest participants (ages 18–29) exhibited the most accurate self-awareness. Their confidence closely aligned with their performance. For example, individuals in the youngest cohort who said they were not-at-all confident scored only 64.8% on average, below even the overall average score for seniors. However, individuals in the same age bracket who said they were very or extremely confident in their ability to detect AI-generated content scored considerably higher, at 79.9% and 82.4%, respectively.
Participants in the middle age brackets (ages 30–44 and 45–64) showed middling self-awareness. Their scores increased with higher confidence levels, except at the top end: in both age groups, respondents who said they were extremely confident actually scored lower on average than those who were very confident.
The findings suggest that a large contingent of older adults—scrolling their social media feeds, opening their emails, watching videos, and sharing what they find online—genuinely trust their ability to spot AI-generated content. However, in reality, many struggle to distinguish between human and AI-generated material.
In our study, seniors misidentified content by producing a false positive or false negative roughly one out of every three times (in line with their 65.5% average score), even when they were very or extremely confident in their ability to tell the difference. Not only does that make them more susceptible to scams and fraudulent schemes, but it also means they’re more likely to unintentionally contribute to the spread of misinformation that increasingly floods the Internet.
Social media, in particular, has become a breeding ground for false information.
Social media platforms use complex algorithms to analyse and predict user behaviour—such as likes, shares, comments, watch times, and purchases—to curate personalised content feeds. The more users engage with a certain type of post or advertisement, the more they’re shown similar types of content, creating a feedback loop.
When people engage with AI-generated content on social media, whether they know it or not, they signal to the platform’s algorithm to serve up similar content. The self-reinforcing nature of these algorithms becomes especially problematic when false or deceptive material is involved.
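The toy simulation below illustrates that dynamic in miniature. It is deliberately simplistic (real ranking systems are vastly more complex), and the topics, probabilities, and boost factor are invented purely for the example.

```python
import random

# Toy engagement feedback loop: each engagement nudges a topic's
# ranking weight upward, so similar content surfaces more often.
weights = {"news": 1.0, "recipes": 1.0, "ai_slop": 1.0}

random.seed(7)
for _ in range(500):
    # The feed samples a topic in proportion to its current weight.
    topics = list(weights)
    topic = random.choices(topics, weights=[weights[t] for t in topics])[0]
    # Suppose the user engages with AI-generated posts slightly more often.
    engage_prob = 0.55 if topic == "ai_slop" else 0.45
    if random.random() < engage_prob:
        weights[topic] *= 1.03  # engagement boosts future exposure

print(weights)  # the slightly more engaging topic tends to pull ahead
```

Even a small edge in engagement, compounded by the self-reinforcing boost, tends to tilt the feed over time; that is the loop the quote above warns about.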
We asked Andrea Rosales, a data and communications researcher at Universitat Oberta de Catalunya in Barcelona, about algorithmic harms and how they might impact unsuspecting Internet users.
‘Closely related to a user’s confidence and ability to detect fake content is the concept of algorithmic awareness, which refers to being aware of the use of AI systems and understanding how they work,’ she explains. ‘Low algorithmic awareness takes individuals away from the (limited) possibilities to control self-representation, assess algorithmic outputs, resist algorithmic dictatorship, and prevent scams.’
It stands to reason, then, that the older adults who struggled to identify AI-generated content in the controlled environment of our study are likely having an even more difficult time doing so amid the noise of their actual social media newsfeeds.
To be clear, we did not design our study to be an elaborate ‘gotcha,’ nor is this report an argument against older people being active online.
Everyone deserves a safe, reliable, and honest online experience regardless of age, location, or socioeconomic status. The real question is: How can we make that a reality?
At least part of the answer lies in understanding what types of online content cause the most confusion, particularly amongst older adults. Our research findings offer some insight.
The Most Difficult Types of AI Content to Identify
While seniors had the most difficulty identifying AI-generated content in our quiz, younger age groups were far from perfect. Interestingly, all age groups struggled the most with the same types of content: audio and text.
Of 15 total questions, three were audio-based and two were text-based.
AI-generated audio posed the biggest challenge across all four age groups, especially for seniors, who had roughly a coin flip’s chance of correctly determining whether a voice message was real or generated by AI.
We conducted a one-way ANOVA and Tukey HSD post-hoc test to determine if there was a difference in performance among individuals aged 65 and older across content types. The one-way ANOVA, F(3, 5448) = 102.9, p < 0.001, indicates that performance varies significantly depending on the type of content. The table below displays the results of the Tukey HSD test for pairwise comparisons.
Scammers have leveraged over-the-phone tactics for decades to exploit older people. These results suggest that distinguishing legitimate calls and voicemails from fraudulent ones is only getting more challenging—particularly for the senior population.
Today, bad actors can use AI tools to ‘clone’ voices and impersonate loved ones and authoritative figures. Voice scams will likely become even more common as AI technology proliferates. In Australia, they’ve already evolved from the infamous ‘Hi Mum’ scam, a text message-based scam with a high success rate when used to target individuals in the 35- to 45-year-old age range.
Younger people falling victim to text scams might seem surprising, but it is consistent with our findings: every age group struggled with text-based questions nearly as much as audio-based ones.
These days, AI-generated text permeates nearly all corners of the Internet.
Generating AI text is as easy as visiting ChatGPT or Claude, typing in a basic prompt, and letting the chatbot write the copy.
Even Suzanne, the woman featured at the beginning of this report who doesn’t trust AI, admits she sometimes uses it to help her with writing letters.
‘It offers different ways of expressing yourself, and I’ve really enjoyed that,’ she says.
Although AI-generated text has plenty of tells (at least for now), it can still be tricky to spot, especially if you’re just casually browsing online. The emergence of other tools, including audio generator apps and Sora from OpenAI, makes creating realistic-looking videos and sound clips just as easy as generating AI text. The barriers to creating AI-generated media have all but disappeared, and the gap between real and AI content grows slimmer by the day.
Experts often recommend ‘slowing down’ as a strategy for deciphering AI-generated content—that is, pausing to consider what you’re reading or viewing for a moment instead of absorbing it without a second thought and moving on. It’s certainly logical advice.
However, our research suggests that slowing down isn’t always a foolproof way to identify AI content.
Spotting AI Content: More Time ≠ More Success
We live in a multitasking world—a period in history when countless distractions throughout a given day try to ‘hijack’ our attention.
That’s increasingly true online, but until recently, we didn’t necessarily have to question whether what we were reading, viewing, or hearing was created by artificial intelligence.
Overall, we found significant differences in how long it took participants to complete our AI Detection Quiz. On average, the 18–29 age group completed it in roughly five minutes. Completion times increased with age, up to an average of just over eight minutes for participants 65 and older.
We conducted a one-way ANOVA and Tukey HSD post-hoc test to determine whether the completion time differed by age group. The one-way ANOVA, F(3, 3194) = 38.93, p < 0.001, revealed a significant effect of age on total time spent. The table below summarises the pairwise comparisons from the Tukey HSD test.
But here’s the interesting part: taking longer didn’t help.
We ran two statistical tests to analyse the relationship between time spent on the quiz and overall score, and the verdict was quite clear: taking longer to complete the quiz did not yield better results, even when controlling for age and confidence level.
- Partial Correlation: The partial correlation test showed no significant correlation between completion time and score, r = 0.023, p = 0.232.
- Multiple Linear Regression: We then examined how participants’ precise age, confidence, and time spent on the quiz predicted their overall performance. The model explained 15.8% of the variance in scores.
Time spent did not predict overall score (b = 0.001, p = 0.121). In other words, taking longer didn’t measurably improve or hurt scores.
Age, on the other hand, did predict scores: older participants still tended to score lower, on average. Each additional year of age was associated with a 0.27-point decrease in score (b = -0.270, p < 0.001).
Confidence also played a role. Compared with participants who said they were extremely confident, those who were slightly confident (b = -2.798, p = 0.022) or not-at-all confident (b = -5.467, p < 0.001) scored significantly lower.
Among seniors, the results were similar. A separate regression found that spending more or less time on the quiz did not reliably predict whether they scored higher or lower. Taking extra time didn’t help them do better, but going quickly didn’t seem to hurt their scores, either (b = 0.00001, p = 0.990). Again, we found that age remained a significant factor. For respondents 65 and over, each additional year of age was associated with a 0.35-point decrease in score (b = -0.350, p < 0.001).
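For the statistically inclined, the sketch below shows one way to reproduce both tests in Python with statsmodels: the partial correlation computed via residuals, and the OLS model with confidence entered as a categorical predictor referenced to the ‘extremely confident’ group, as in the results above. The file and column names are hypothetical, not those of our actual dataset.

```python
import pandas as pd
import statsmodels.formula.api as smf

df = pd.read_csv("quiz_results.csv")  # score, age, confidence, time_sec

# Partial correlation between completion time and score, controlling
# for age and confidence: regress each variable on the covariates,
# then correlate the residuals.
covars = "age + C(confidence)"
resid_time = smf.ols(f"time_sec ~ {covars}", data=df).fit().resid
resid_score = smf.ols(f"score ~ {covars}", data=df).fit().resid
print(f"partial r = {resid_time.corr(resid_score):.3f}")

# Multiple linear regression: score predicted by age, confidence,
# and time, with 'Extremely' confident as the reference level.
model = smf.ols(
    "score ~ age + C(confidence, Treatment(reference='Extremely')) + time_sec",
    data=df,
).fit()
print(model.summary())  # R-squared = share of variance explained
```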
Hopefully, it goes without saying, but these results are not an indictment of ‘slowing down.’
It’s always a good idea to take a moment of consideration, especially while navigating the Internet. But as the line between real and AI-generated content seems to blur more by the day, that extra moment of reflection might not be the trusted safeguard it once was.
Where Do Seniors Fit in the AI Equation?
On the surface, the results of our AI Detection Quiz aren’t surprising: older adults struggled to identify AI-generated content online decidedly more than younger age groups.
But bigger questions emerge when you dig into the data and extrapolate what those numbers could indicate on a larger scale. Where do seniors fit in all of this? Are they simply passive participants in the AI revolution, or do they deserve a seat at the decision-making table? Do they even want a seat at the table?
Rayid Ghani is a Distinguished Career Professor at Carnegie Mellon University, where he specialises in AI, machine learning, and data science. Mr. Ghani was the Chief Scientist of U.S. President Barack Obama’s 2012 election campaign, and he also founded the Center for Data Science & Public Policy at the University of Chicago. In more recent years, he has testified in front of the U.S. Senate about how to address some of the structural challenges AI systems often face. He offers a stark warning about the possibility of the AI revolution leaving behind large groups of people.
‘When AI systems are designed primarily to maximise efficiency or reduce cost, they tend to focus on the people who are easiest to reach, easiest to engage, and least expensive to serve,’ he says. ‘That can worsen existing disparities.’
Suzanne, the older woman from earlier in the report who strictly uses AI to help her write letters, typifies Mr. Ghani’s point. She says she wants to learn about AI—but she doesn’t know where to start.
‘One of the biggest problems for people in my age group is that once you’ve retired from the workforce, you don’t have anyone helping you stay up-to-date with technology,’ she says. ‘I want to learn more about AI, but I don’t have the knowledge or anyone to ask.’
Suzanne speaks proudly, if somewhat hesitantly, about her online safety. Her antivirus software is current, and she even hired an IT professional to secure her email accounts and ensure her identity isn’t floating around the dark web. She’d like to take a similar approach to learning about AI, but presently, educational opportunities for people her age seem few and far between.
‘Honestly, I wish I’d been born 20 years earlier so I wouldn’t have to deal with any of this,’ she admits.
Suzanne’s frustration isn’t unique. We spoke to multiple seniors for this report who expressed the same sentiment.
They know they’ve fallen behind, but as of yet, no one is helping them catch up.
Making AI Work for Everyone
Education and regulation will be critical pieces of the puzzle as we continue to navigate this unsettled AI terrain.
Fred Heiding is a research fellow at Harvard University’s Defense, Emerging Technology, and Strategy Program, where he studies AI cybersecurity threats. He asserts that government policies need to be strengthened to ensure online experiences remain safe for everyone, not just seniors.
‘We need much more stringent policies and regulations against fraud—it is very difficult for users to know what it really means to be safe and responsible online,’ he says. ‘Governments must create an environment where it is easy for users to make informed decisions.’
Mr. Heiding compares it to buying groceries at the supermarket: We all know some foods are healthier than others, so we make informed decisions about what we purchase. In that sense, it’s easy to eat healthy food if that’s what we want. Similarly, we must create a digital environment where it’s easy to be safe online.
Regarding education, AI ethics and literacy courses may no longer be niche subjects reserved for grad school students—they’ll likely need to be core subjects taken alongside health and physical education at the elementary and high school levels.
Of course, that doesn’t much help older adults like Suzanne, who want to learn more about AI but have long since left school and the workforce. Resources such as Senior Planet from AARP are useful, though ironically, accessing them requires the very technical proficiency many older adults are still trying to develop.
Some experts point out that even education may only go so far.
Dr. Marti DeLiema, the University of Minnesota elder fraud researcher quoted earlier, believes we are approaching a point at which none of us, regardless of age, will be able to accurately detect AI-generated content.
‘I am not certain that education around AI will prevent mistrust, and in fact, I suspect the opposite effect may occur: The more you know about AI, the greater mistrust you will feel about digital communication,’ she says. ‘But maybe a bit of cynicism is actually protective.’
To Dr. DeLiema’s point, people in younger generations might already possess that ‘protective’ mistrust. Those who were born into or at least raised in this highly digitised world have an inherent awareness that older adults have to develop from scratch. Although AI is a new technology for the majority of the population, it makes sense that Gen Z and Millennials are in a better position to adapt compared to older adults who grew up in the pre-Internet world.
Dr. Judith Bishop holds a PhD in linguistics and works as a full-time researcher at La Trobe University in Melbourne, Australia, where she studies the intersection of language and AI technology. She contends that short-term AI training for older adults may not be effective in the long run because of how quickly AI systems are evolving and becoming more human-like.
‘Informed proactive scepticism is the best approach—not training older adults to be better AI detectors,’ she says. ‘This individual due diligence approach may, in time, be supported by regulatory approaches—such as watermarking AI-generated content—where the information is likely to influence decision-making processes or purchase decisions.’
Dr. Bishop notes that today’s seniors have lived most of their lives in a world where authenticity in communication was just assumed. Younger generations, by contrast, have grown up watching that expectation dissipate—especially regarding AI-generated communications.
Clearly, we have a long way to go before AI can truly be considered a tool for everyone. We’re at the advent of this technological revolution, and the majority of the story has yet to be written. But if the results of our quiz mirror real-world trends, then they bring to light several pressing areas for discussion:
- AI education and strategies to protect vulnerable populations from AI-related harm, especially scams and misinformation
- Developing AI models that don’t leave behind large groups of people, such as older adults, who may be more challenging or costly to serve
- Ageism in technology and the lack of senior representation on leadership teams at AI companies
- Perverse incentives fueling the AI arms race, including widespread disinformation and the prioritisation of user adoption, business growth, and financial gain over authenticity and safety
Indeed, we may have already reached a point in our collective AI journey where the safest course of action is to assume a piece of online content is fake unless proven otherwise.
But participants in our study didn’t have that luxury.
They were presented with just two choices, and with those constraints, seniors aged 65 and older accurately identified content as human or AI-generated only moderately better than chance.
The findings of our research hint at a concerning trend: a widening gap between the pace of AI advancement and the ability of large segments of the population to keep up. If left unaddressed, this divide could redefine how older adults participate in the digital world.
Study Limitations
We found strong correlations among age, confidence level, and completion time. Future studies could build upon our regression model and reduce potential multicollinearity by examining additional factors such as socioeconomic status, education, and gender.
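Variance inflation factors (VIFs) are one standard diagnostic for multicollinearity. The sketch below (assuming statsmodels and a numeric coding of confidence; the file and column names are hypothetical) shows how a follow-up study might screen its predictors:

```python
import pandas as pd
import statsmodels.api as sm
from statsmodels.stats.outliers_influence import variance_inflation_factor

df = pd.read_csv("quiz_results.csv")  # age, confidence_num, time_sec
X = sm.add_constant(df[["age", "confidence_num", "time_sec"]])

# Rule of thumb: VIF > 5 (some use 10) flags problematic collinearity.
for i, name in enumerate(X.columns):
    if name == "const":
        continue  # the intercept's VIF is not meaningful
    print(f"{name}: VIF = {variance_inflation_factor(X.values, i):.2f}")
```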
Since participants in our study had to opt in voluntarily, there may be some self-selection bias at play. For example, respondents may be more tech-savvy or interested in AI than the general population.
Finally, AI technology is advancing rapidly, which will likely make it even more challenging in the future to distinguish between human and AI-generated content. As a result, the findings in this study should be seen as a snapshot in time rather than long-standing conclusions.
Methodology
This study was conducted by The CareSide between July and September 2025. Responses were collected from 3,198 participants in English-speaking countries. The data was collected via an AI detection quiz; the quiz was promoted and distributed through social media, advertisements, and the company newsletter. Participants were only allowed to complete the quiz once.
Addendum: Does AI pose a risk to seniors?
The results of our research led us to numerous questions.
Clearly, seniors struggled to identify GenAI content—even when they were confident in their ability to do so. Overall, 63% of seniors considered themselves moderately, very, or extremely confident in their AI detection skills.
Does that self-assurance mean the vast majority of seniors are accepting of AI? Do they still view it as a potential threat, specifically as it pertains to scams and fraud? What are their views on AI safety?
We conducted a follow-up survey to learn more. A total of 1,236 people aged 65 and over completed the survey. Anonymous responses were collected from English-speaking countries, including Australia, the United States, Canada, the United Kingdom, and New Zealand.
Seniors Are Concerned About AI Scams
The first question in our survey asked point-blank: How concerned are you about AI-driven scams?
More than 95% of seniors reported feeling at least some level of concern, and more than 70% said they feel very or extremely concerned.
When you combine the results of our AI-detection study with the results of the survey, a story begins to materialise: Although seniors are confident in their ability to detect AI, most of them are still concerned about being exploited by AI-driven schemes. And with good reason—more than 1 in 5 respondents disclosed that they’ve already fallen victim to scams or misleading messages they later suspected were AI-generated, a number that will likely rise as AI tools become better at mimicking human communication and behaviour.
Suffice it to say, that many people falling for fraudulent schemes is concerning. But even that figure doesn’t tell the full story, because another important question is: How frequently are seniors being targeted?
To get a clearer picture, we asked: In the last 12 months, have you received a suspicious message, email, or phone call you believe was generated by AI, such as a deepfake voice, automated text, or phishing email?
The majority (59.5%) reported receiving a suspicious message within the past year that they believed was generated by AI.
Interestingly, older people aren’t just concerned about falling victim to AI-related dishonesty themselves. Many fear they’re contributing to the spread of misinformation online, with more than 63% of our survey respondents saying they worry they’ve unintentionally shared, liked, or commented on an online article or social media post within the last 12 months that was false or inaccurate.
Again, when you combine these survey results with the data from our earlier AI detection research, more questions emerge. For starters, if seniors struggle to identify AI content (particularly text and audio), yet most believe they’ve received suspicious AI-generated messages or unknowingly spread misinformation online, what is the true number? How often are they targeted and fail to detect it? How often do they amplify false content without even realising it?
We can’t answer those questions with any degree of certainty, but that’s the problem: in an AI-driven world, certainty is becoming a myth. That’s especially true for people who lack the tools and digital savvy to detect AI.
Preserving Trust in a World of AI
The final two questions in our follow-up survey focused on trust. A resounding 91.6% of respondents believe labelling AI-generated content (with a watermark or other indicator) should be mandatory.
We also asked participants if they’d trust news organisations less if those outlets used AI to write some of their online articles. Nearly 70% of respondents said their trust would indeed diminish, indicating that news pieces are still given more credence when they have a human touch—for older generations, at least.
These additional findings raise more questions, but ultimately, they all circle back to the foundational issue raised in our original report: How do we make AI work for everyone?
There’s a growing drumbeat for resources, tools, and laws that protect younger people from AI-related harms, and advocacy groups are even stepping up to shield children from the darker sides of AI. But by and large, older adults aren’t currently prioritised in those conversations. Our survey results highlight why that’s an issue: If the vast majority of older people are concerned about falling victim to AI-driven scams, and more than 1 in 5 seniors believe they’ve already been victimised, that not only deserves attention, it requires action.
Rayid Ghani, Fred Heiding, Dr. Judith Bishop, and the experts who contributed to our original report all posit that action begins at a systemic level, including stronger regulations, improved representation, modernised policies, and public awareness campaigns to protect seniors and other marginalised groups from AI-related dangers. Ongoing research—including this recent study from Mr. Heiding demonstrating how attackers exploit AI safety failures to harm vulnerable populations—is already shedding light on the urgency of the situation.
But as it stands, the AI playing field remains uneven, and much of the discourse around AI ethics and safety overlooks the needs and vulnerabilities of older adults.