In our recent webinar, “AI, ChatGPT: The Future of Constituent Communication,” we were fortunate to have AI expert and former US Congressman Will Hurd speak with Prompt.io CEO and Founder Phil Gordon.
Over almost two decades in Congress and at the CIA, Will Hurd has worked on the most pressing national security issues, including helping to advance substantive legislation and a national strategy for artificial intelligence. He currently serves on the Board of OpenAI, the maker of ChatGPT.
During this webinar, Will and Phil talked about how AI tools are currently being used, the future of AI tools in communication, and the work that still needs to be done around new technologies.
Answers to some of the most pressing questions about AI:
Q: If two different people give the exact same parameters, will the output be the same?
The answer is no. ChatGPT is a statistical model, so the answers you get back from the exact same query will differ, and you can even ask it to compose a different answer.
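The nondeterminism Will describes comes from sampling: models like ChatGPT choose each next word from a probability distribution rather than always taking the single most likely option, so identical prompts can produce different text. Here is a minimal toy sketch of temperature-based sampling; the vocabulary and logit values are made up for illustration and this is not OpenAI's actual implementation:

```python
import math
import random

def sample_token(logits, temperature=1.0, rng=random):
    """Draw one token index from a softmax over temperature-scaled logits."""
    # Higher temperature flattens the distribution, making repeated
    # runs of the same prompt less likely to agree.
    scaled = [l / temperature for l in logits]
    m = max(scaled)  # subtract the max for numerical stability
    exps = [math.exp(s - m) for s in scaled]
    total = sum(exps)
    probs = [e / total for e in exps]
    # Inverse-transform sampling: walk the cumulative distribution.
    r = rng.random()
    cumulative = 0.0
    for i, p in enumerate(probs):
        cumulative += p
        if r < cumulative:
            return i
    return len(probs) - 1

# Hypothetical next-token candidates and scores.
vocab = ["the", "a", "an", "this"]
logits = [2.0, 1.5, 0.5, 0.1]

# With temperature > 0, 200 draws from the same "prompt" will almost
# certainly yield more than one distinct token.
draws = {vocab[sample_token(logits, temperature=1.0)] for _ in range(200)}
```

At a very low temperature the scaled logits overwhelmingly favor the top candidate, so sampling becomes effectively deterministic; that trade-off between variety and repeatability is exactly why two people with identical inputs get different outputs at normal settings.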
Q: How do you deal with the fact that ChatGPT is frequently unavailable because it's over capacity?
ChatGPT is a training instance. The level of interest in ChatGPT was unexpected, and this is ultimately an interesting thing for us. Our goal is to get to AGI; that remains the focus.
What I've found is that if you go to beta.OpenAI.com/playground instead of ChatGPT, you'll get nearly 100% availability. It doesn't have all of the memory and context features that ChatGPT itself has, but it does use the same underlying engine.
Q: Does anyone own the copyright to any given output? Is it expected that using ChatGPT would need to be disclosed in any way?
There are currently some disclaimers in the OpenAI Playground. I don't believe there is a requirement to say something was generated by ChatGPT, but anytime people send emails or a text from a campaign or an individual, I think some of those same approval rules would apply.
It's not like you're going to turn this on and tell ChatGPT or your AI engine, "Hey, every day at two o'clock, send a tweet out about some topical issue." Everything that is generated here can be thought of as boilerplate or an idea generator.
And I think you would still be held responsible. If you publish anything generated here under your own social media account, or send an email that was generated here, no one is going to know it was generated by AI; you're going to be held accountable for it.
We're working on some in-house initiatives within OpenAI around disclosure. I think being honest and open with the people you're engaging with is the best approach. People who are using chatbots now know they're talking to a chatbot, right? So I think applying the same values that we've used in other forms of communication is important.
Q: There are stories about people being able to tell what was generated by ChatGPT/AI versus a real person. What have you seen?
I haven't tested it, but I think that is correct. Internally, we're looking at initiatives to figure this out as well. How we'll do it has not been decided.
I think there will be a lot of research here. Anytime you have a technology that's going to be deployed as widely as this, it will be important to be able to tell whether something was AI-generated with some level of accuracy.
Q: What happens if there are dueling AI engines?
I'm a fan of competition, and the idea that there are other entities competing in this space is good. What matters is making sure we're all focused on the end outcome: a tool that's a net positive for humanity. I think consumers are going to have a choice. I'm obviously partial to the organization I'm proud to be a part of, but I think you're going to see a lot of competition in the very near future.
Want to hear the full conversation?
Get the rest of this inside scoop on OpenAI from Will Hurd and Phil Gordon plus a full demonstration of ChatGPT in our latest webinar, “AI, ChatGPT: The Future of Constituent Communication.”