“By far, the greatest danger of Artificial Intelligence is that people conclude too early that they understand it.” —Eliezer Yudkowsky
We’ve all heard plenty about artificial intelligence (AI) in recent months, yet it isn’t new. Over the years, AI technologies seemed remarkable when they first appeared. Today, they are so much a part of our everyday lives that we don’t even think of them as AI; they are simply the tools we use (e.g., “Hey, Alexa”). Those technologies were certainly innovative, but this next level, known as generative AI, takes things a step further, into freeform intelligent behavior. Let’s face it: new technology is exciting, and many people jump on board without fully understanding the implications. Even the “experts” are hard-pressed to identify all the pros and cons of generative AI.
What is Generative AI?
Generative AI is a type of artificial intelligence that can create new content, including text, images, video, and music. It is powered by machine learning models trained on massive amounts of data.
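To make that concrete, here is a minimal sketch of what “creating new content” looks like from the developer’s side. It assumes the OpenAI Python SDK (v1.x) and an illustrative model name and prompt; none of these details come from the article itself.

# Minimal sketch: asking a generative model to draft new text.
# Assumes the OpenAI Python SDK (v1.x) with an API key set in the
# OPENAI_API_KEY environment variable; the model name is illustrative.
from openai import OpenAI

client = OpenAI()  # reads OPENAI_API_KEY from the environment

response = client.chat.completions.create(
    model="gpt-3.5-turbo",
    messages=[
        {"role": "system", "content": "You are a helpful writing assistant."},
        {"role": "user", "content": "Draft a 100-word welcome note for new association members."},
    ],
)

# The reply is newly generated text, produced by predicting likely words
# from patterns in the training data rather than by looking anything up.
print(response.choices[0].message.content)

The same pattern applies to images, video, and music; only the model and the type of output change.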
Generative AI is a powerful new technology that is still in its early stages of development, yet it has the potential to change the world and revolutionize many industries. As the technology matures, we can expect to see even more innovative applications.
Yes, But…
With any new technology, there is usually a “yes, but…” Generative AI is no exception. Yes, it is transformational, but there are risks. Yes, it can take the burden off staff, freeing them up to focus on the work only humans can do. But can we trust it to provide real content and real data? Yes, generative AI is amazing. But it is not infallible.
I recently had the pleasure of attending an outstanding webinar hosted by the American Society of Association Executives (ASAE) on Critical Considerations Surrounding ChatGPT and generative AI. The panelists, including a publisher and attorney for associations along with a research professor of robotics, provided some great insights into both the benefits and the risks of generative AI.
First, the Benefits
Generative AI can automate tasks that are currently done by humans, freeing up time for more creative and strategic work.
It can analyze large amounts of data to identify patterns and trends that would be difficult for humans to spot. This information can be used to make better decisions about business operations, product development, and marketing campaigns.
Generative AI can be used to create new ideas and concepts that would not have been possible without the help of a machine. This can lead to new products, services, and experiences that can disrupt existing markets and industries.
In addition, it has the potential to connect users across different functions within an association, providing great opportunities for collaboration.
These are all great benefits; however, with them come some risks.
Risky Business?
As with most anything that is considered “groundbreaking,” generative AI should come with a warning label. The primary concern is the influx of fake content or data.
There is a tendency to see generative AI as intelligent. In reality, it is a statistical model built from enormous volumes of text. It doesn’t comprehend the information it processes; it has learned to correlate the information it aggregates and to use those correlations to produce the requested content with amazing speed. In essence, it doesn’t “know” whether the information it has gathered is true or false.
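To see what “just a statistical model” means in practice, here is a small sketch, assuming the Hugging Face transformers library and the publicly available GPT-2 model (a much smaller cousin of the models behind ChatGPT, used purely for illustration). It prints the words the model considers most likely to come next; whether the top-ranked word is actually true plays no part in the calculation.

# Sketch: a language model only ranks likely next words.
# Assumes the Hugging Face transformers library and the small public
# GPT-2 model; illustrative only, not the model behind ChatGPT.
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer

tokenizer = AutoTokenizer.from_pretrained("gpt2")
model = AutoModelForCausalLM.from_pretrained("gpt2")

prompt = "The capital of Australia is"
inputs = tokenizer(prompt, return_tensors="pt")

with torch.no_grad():
    logits = model(**inputs).logits[0, -1]  # scores for the next word piece

probs = torch.softmax(logits, dim=-1)
top = torch.topk(probs, 5)

# The model reports what tends to follow these words in its training
# text; it is not consulting an atlas.
for p, token_id in zip(top.values, top.indices):
    print(f"{tokenizer.decode([int(token_id)]).strip():>10}  {p.item():.3f}")

Production chatbots are vastly larger and more carefully tuned, but the underlying operation is the same kind of next-word prediction, which is why they can state a falsehood as fluently as a fact.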
Misinformation is a major issue these days, and generative AI systems aren’t helping. In a recent court case, attorneys are facing possible sanctions for citing past legal cases that appeared to strengthen a lawsuit filed on behalf of their client against an airline. One of the attorneys had found those cases using ChatGPT. The problem was that several of the cases were not real; in essence, ChatGPT made them up.
So, what can we do to ensure the accuracy of the information we’re disseminating? One of the attorneys in the case referenced above offered a simple solution, one he wished he had used himself: follow-up research to confirm that the information provided by ChatGPT was legitimate. That step would have saved him and his colleague a whole lot of pain and embarrassment.
There is no doubt that generative AI is a powerful new technology with the potential to change the world and revolutionize many industries. It is transformational, to be sure, but if you don’t consider the associated risks, it can create significant problems for you and your organization. In part two of this series, we dive into the key legal issues associations should consider when it comes to generative AI.
What happens when you ask a generative AI platform like ChatGPT for the source of the information it provided?
My request:
“Identify the source of this statistic: 90% of construction industry respondents believe that a shift toward environmental sustainability is imminent.”
ChatGPT’s answer:
“As an AI language model, I don't have real-time access to current news or statistics beyond my last training cutoff in September 2021. Therefore, I cannot provide the specific source of the statistic you mentioned. It's possible that this statistic was sourced from a survey or study conducted by a research organization, consulting firm, or industry association. To determine the exact source, I recommend conducting a search using relevant keywords or contacting industry-specific organizations that might have conducted the survey.”
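You can reproduce this kind of exchange programmatically. The sketch below again assumes the OpenAI Python SDK (v1.x) and an illustrative model name, and it reuses the prompt quoted above; whatever source the model names in reply should be treated as a lead to verify, never as a confirmed citation.

# Sketch: asking a model for the source of a statistic via the API.
# Assumes the OpenAI Python SDK (v1.x); the model name is illustrative.
from openai import OpenAI

client = OpenAI()

question = (
    "Identify the source of this statistic: 90% of construction industry "
    "respondents believe that a shift toward environmental sustainability "
    "is imminent."
)

response = client.chat.completions.create(
    model="gpt-3.5-turbo",
    messages=[{"role": "user", "content": question}],
)

# The answer is generated text, not a verified citation. Confirm any
# named study or organization independently before repeating it.
print(response.choices[0].message.content)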