Finding Data to Support Thought Leadership: Are You Placing Too Much Trust in AI?


When I incorporate data into any content, especially thought leadership, I only use recent data from the original source.


And the original source must be reliable and trustworthy.


Based on my experience with ChatGPT and Gemini, artificial intelligence (AI) doesn’t follow the same standards.


The important question here is, are you placing too much trust in AI?


Here are two examples (there are more) from my world, both from the past two weeks.


Example 1: Allow Me to Correct Myself


I was working on a thought leadership article about marketing for the construction industry.


I asked ChatGPT for data about the effectiveness of marketing for the specific audience of this piece.


ChatGPT cited dozens of data points. None of the sources were reliable or even explained their methodology.


Not a single one.


I tightened up the prompt to limit the response to original, well-established sources.


Still, not a single reliable source.


So I tried Gemini, which cited a compelling data point with a reliable source. I went to the source but couldn’t find the data point.


Thinking I might be looking in the wrong place, I asked Gemini for a link.


Gemini said the statement it had generated was incorrect. The data point referred to marketing in general and was not specific to the construction industry.


For the same piece, Gemini cited another data point used by multiple articles about construction marketing.


When I asked for a link to the original source, Gemini said that there was no record of the actual study that produced the data point.


Apparently, either someone made it up or the study was pulled. Multiple recent articles ran with it but didn't link to the non-existent study.


AI did the exact same thing.


Example 2: Dated Data


A client’s marketing department sent me a rough draft for a technology-focused thought leadership article generated by AI.


I'm not a fan of this approach, but it was at least based on an internal interview with a subject matter expert, not a generic prompt to “write an article about XYZ.”


The AI-generated draft included several data points that strengthened the core message of the article.


A quick search revealed that the data points were based on research from 2005, 2015, and 2016.


As a point of reference, the iPhone was introduced in 2007. Windows 10 launched in 2015 and will no longer receive support in about a month.


Again, this data was included in a thought leadership piece about technology.


This is a pretty big company. Can you imagine if the article had been published without anyone checking the data and someone called them on it?


Trust to a Fault?


I recently partnered with Propellic, an AI-first travel marketing agency, to write a report on a behavioral research study about AI’s impact on travel planning and booking.


Groundbreaking stuff, by the way.


One thing that jumped out at me was the across-the-board trust people expressed in AI-generated information when researching and planning trips and activities.


This study was specific to the travel industry, and it obviously isn’t the same as citing data in thought leadership. But it got me thinking about the level of trust in AI-generated responses.


My concern is that organizations are using AI as a shortcut for content generation rather than a tool that adds value to the process and the actual content.


It’s not just that they aren’t fact-checking AI.


They’re not even questioning it.


Remember, AI scours the internet for information to generate responses to our queries.


Based on my experience, AI takes a “something is better than nothing” approach when generating responses.


Even when something hasn’t been checked for accuracy or trustworthiness.


The internet isn’t exactly a bastion of truth, and AI readily admits that it gets things wrong.


Trust but verify? Fine.


Blindly trust? Big risk.


Using AI Responsibly and Efficiently to Gather Data


I still use AI to gather data to support the content I write for clients, whether thought leadership, website content, impact reports, or case studies.


AI allows me to be very specific with my criteria and standards. It also generates a list of data points with brief summaries instead of forcing me to click through a bunch of blue links.


The goal is efficiency, not shortcuts. Big difference.


Specificity is key to achieving that efficiency. A prompt needs to spell out your criteria, such as recency, industry, and acceptable source types, for AI to generate relevant responses.


But accuracy and truth are paramount.


The data we hope to find might not exist, or the existing data might not be trustworthy or reliable.


To maintain the integrity of our content and brand, we may need to find another way to make our case instead of settling for whatever “something” AI generates.


Brands need to do better. AI needs to be trained to do better.


Because lost trust is extremely difficult to regain.