That question is likely top of mind for anyone who has seen or played around with ChatGPT, the AI-powered chat tool from OpenAI, the $20 billion AI research organization.

Since the tool’s release on Nov. 30, a surefire way to go viral on Twitter has been to post a transcript showing ChatGPT — built on top of OpenAI’s large language models (LLMs) — doing very passable white-collar knowledge work.

To be sure, the output is far from perfect. Some ChatGPT answers have bias, circular logic and inaccuracies, which are often disguised by very confident prose.

However, the range of topics and speed with which ChatGPT can spit out a first draft are jarring.

Legal documents? Check. Financial analysis? Check. Cold sales pitches? Check. Corporate strategy? Check. Coding? Check. Comedy? Not quite (as someone who writes dumb jokes on Twitter all day, ChatGPT’s current inability to crack humor gives me a sliver of life hope).

Ethan Mollick, an innovation professor at The Wharton School of the University of Pennsylvania, applied ChatGPT to his own job and showed that it could create a credible course syllabus and lecture notes.

“I think people are underestimating what we are seeing from ChatGPT,” Mollick tells me. “If you are a white-collar worker, this is transformative for productivity.”

And that’s with the current OpenAI LLMs. The organization is slated to release a much more powerful LLM in 2023 and Google has been working on one for years (full disclosure: I co-created a research app built on top of LLMs).

Mollick says the key to understanding ChatGPT’s potential is to recognize its real strengths. While the current chat AI may fall short on factual and predictive tasks, it’s a powerful tool for revision and ideation.

Of course, mileage will vary for every role and depends on how many errors you’re willing to tolerate in your work. Take creative writing. It requires a lot of idea generation, and mistakes can be quickly fixed without creating harm. Conversely, you probably want more factual certainty and fewer revisions in managing a nuclear power plant.

In a recent article, Mollick shows four ways to interact with ChatGPT to demonstrate its promise as a creative aid (including designing a game and bantering with it as a “magic intern with a tendency to lie, but a huge desire to make you happy”).

Across white-collar industries, Mollick believes people “working with AI is better than just AI.” The question becomes, in what percentage of each industry can the AI and human combination outperform just AI? Is it 10%? 20%? 30%?

Former Bloomberg Opinion columnist Noah Smith and well-known pseudonymous AI researcher roon also laid out a future path for human-AI collaboration dubbed the “sandwich model.”

• Human gives AI a prompt (bread)

• AI generates a menu of options (hearty fillings)

• Human chooses an option, edits and adds touches they like (bread)

Smith and roon said the workflow is for any type of generative AI (text, visual etc.) and rattled off some very relevant examples:

Lawyers will probably write legal briefs this way, and administrative assistants will use this technique to draft memos and emails. Marketers will have an idea for a campaign, generate copy en masse and provide finishing touches. Consultants will generate whole powerpoint decks with coherent narratives based on a short vision and then provide the details. Financial analysts will ask for a type of financial model and have an Excel template with data sources autofilled.

Practically, roon tells me that everyone should “stay on top” of AI developments in their field. Some examples: Harvey for law or GitHub Copilot for coding.

“The people who know how to use AI tools will get the raises,” says roon, who also happens to be a great source for funny AI-related tweets.

Another feather in the cap of “ChatGPT won’t replace you just yet” is the abiding desire of humans to have other humans in the loop. As Roderick Kramer, a social psychologist at Stanford University, has noted, “we’re social beings from the get-go: We’re born to be engaged and to engage others, which is what trust is largely about. That has been an advantage in our struggle for survival.” Beginning with the first time we lock eyes with our mothers and begin to mimic their expressions, we crave and cultivate the security that comes with human contact. Mollick points me to two pieces of research showing backlash against AI recommendations in HR and medical settings, even if said recommendations were potentially beneficial.

Attitudes adapt, though. Based on the embarrassing photos of me floating online, our general willingness to put personal information online is probably higher now than it was two decades ago. And the idea of summoning a stranger’s car or sleeping in a stranger’s spare bedroom didn’t sound like a $50 billion concept two decades ago.

So, do I think ChatGPT can do my job? Its ideation skills and first drafts are scary good. Just to be safe, I’m workshopping hours of interpretive stand-up comedy material.

More From Bloomberg Opinion:

• Is ChatGPT the Start of the AI Revolution: Editorial

• Google Faces a Serious Threat From ChatGPT: Parmy Olson

• ChatGPT Could Make Democracy Even More Messy: Tyler Cowen

This column does not necessarily reflect the opinion of the editorial board or Bloomberg LP and its owners.

Trung Phan is the co-host of the Not Investment Advice podcast and writes the SatPost newsletter. He was formerly the lead writer for the Hustle, a tech newsletter.

More stories like this are available on bloomberg.com/opinion

