Johanna Ambrosio
Contributing writer

Prepare for generative AI with experimentation and clear guidelines

feature
May 15, 2023 | 10 mins

Figure out your most probable use cases and get the tech into users' hands, with guardrails. Expect to adapt your business processes as the technology matures.

Credit: The KonG / Shutterstock

Generative AI is catching on extremely quickly in the corporate world, with particular attention from the C-suite, but it's still new enough that there aren't any well-established best practices for deployment or training. Preparing for the technology can involve several different approaches, from conducting pilot projects and lunch-and-learns to forming centers of excellence based around experts who teach other employees and act as a central resource.

IT leaders may remember how, in the past 10 years or so, some user departments ran off to the cloud and made their own arrangements for spinning up instances of software, then dumped the whole mess into IT's lap when it became unmanageable. Generative AI can make that situation look like child's play, but there are strategies for starting to manage it ahead of time.

"It's remarkable how quickly this set of technologies has entered the consciousness of business leaders," says Michael Chui, a partner at consulting firm McKinsey. "People are using this without it being sanctioned by corporate, which indicates how compelling this is."

Ritu Jyoti, group vice president of worldwide AI research at IDC, says the drive to adopt generative AI is coming from the top down. "The C-suite has become voracious AI leaders. It's now mainstream, and they're asking tough questions of their direct reports." Her bottom line: Embrace generative AI, set up a framework for how to use it, and "create value for both the organization and employees."

Getting all that done won't be easy. Generative AI comes with plenty of risks, including incorrect, biased, or fabricated results; copyright and privacy violations; and leaked corporate data. So it's important for IT and company leaders to maintain control of any generative AI work going on in their organizations. Here's how to get started.

Decide which use cases to pursue

Your first step should be deciding where to put generative AI to work in your company, both short-term and into the future. In a recent report, Boston Consulting Group (BCG) calls these your "golden" use cases: "things that bring true competitive advantage and create the largest impact" compared with using today's tools. Gather your corporate brain trust to start exploring these scenarios.

Look to your strategic vendor partners to see what they're doing; many are planning to incorporate generative AI into software ranging from customer service to freight management. Some of these tools already exist, at least in beta form. Offer to help test these apps; it will help teach your teams about generative AI technology in a context they're already familiar with.

Much has already been written about the interesting uses of today's wildly popular generative AI tools, including ChatGPT and DALL-E. And while it's cool and fascinating to create new forms of art, most businesses won't need an explainer of how to remove a peanut-butter-and-jelly sandwich from a VCR written in the style of the King James Bible anytime soon.

Instead, most experts suggest organizations begin by using the tech for first drafts of documents ranging from summaries of relevant research to information you can insert into business cases or other work. "Almost every knowledge worker can have their productivity increased," says McKinsey's Chui.

In fact, McKinsey ran a six-week generative AI pilot program with some of its programmers and saw double-digit increases in both code accuracy and the speed of coding.

Jonathan Vielhaber, director of information technology at contract-research firm Cognitive Research Corp. (CRC), is using ChatGPT to research security issues, including how to test for different exploits, as well as the advantages, challenges, and implementation guidelines for adopting a new password manager. He does some wordsmithing to make sure the result is in his own style, then drops the information into a business case document.

This approach has saved him two of the four hours needed to create each proposal, "well worth" the $20/month fee, he says. Security exploits in particular "can get technical, and AI can help you get a good, easy-to-understand view of them and how they work."

Let your users have at it

To help discern the applications that will benefit the most from generative AI in the next year or so, get the technology into the hands of key user departments, whether it's marketing, customer support, sales, or engineering, and crowdsource some ideas. Give people time and the tools to start trying it out, to learn what it can do and what its limitations are. And expect both sides of that equation to keep changing.

Ask employees to apply generative AI to their existing workflow, making absolutely sure nobody uses any proprietary data or personally identifiable information about customers or employees. When you supply data to many generative AI tools, they feed the data back into their large language models (LLMs) to learn from it, and the data is then out in the ether.
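One lightweight guardrail is to redact obvious identifiers before a prompt ever leaves your network. The sketch below is purely illustrative (the patterns and function name are hypothetical, not from any particular tool); a real deployment would use a vetted PII-detection library and review the rules with legal and compliance teams:

```python
import re

# Illustrative patterns only; real PII detection is much harder than
# a few regular expressions.
REDACTION_PATTERNS = {
    "EMAIL": re.compile(r"[\w.+-]+@[\w-]+\.[\w.-]+"),
    "SSN": re.compile(r"\b\d{3}-\d{2}-\d{4}\b"),
    "PHONE": re.compile(r"\b\d{3}[-.]\d{3}[-.]\d{4}\b"),
}

def redact(prompt: str) -> str:
    """Replace likely PII with placeholder tokens before the text
    is sent to an external generative AI service."""
    for label, pattern in REDACTION_PATTERNS.items():
        prompt = pattern.sub(f"[{label}]", prompt)
    return prompt

print(redact("Contact jane.doe@example.com or 555-867-5309."))
```

Running the redaction as a proxy in front of the AI service, rather than trusting each user to self-censor, keeps the guardrail consistent across departments.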

Track who's doing what so teams can learn from each other and so you understand the bigger picture of what's going on in the company.

Now that CRC's Vielhaber is a paying ChatGPT customer, he plans to implement lunch-and-learn sessions in his company to help introduce generative AI to others and allow them to "see what the possibilities are."

Start training your employees

Depending on what your long-term goals are for the technology, you might need to plan for more formal means of spreading the knowledge. IDC's Jyoti is a big fan of the center-of-excellence approach, where a central group can train different employees or even embed in various business units to help them adopt generative AI most effectively.

New types of jobs might be needed down the road, from a chief AI officer to AI trainers, auditors, and prompt engineers who understand how to create queries tailored for each generative AI tool so you get the results you want.
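Part of what a prompt engineer maintains is reusable, structured query templates rather than ad-hoc typing. As a hedged illustration (the template wording and function name are hypothetical), a template might spell out role, task, and output format explicitly:

```python
# Illustrative only: a reusable prompt template of the kind a prompt
# engineer might maintain, with role, task, and output format made
# explicit instead of improvised each time.
TEMPLATE = (
    "You are a {role}. {task} "
    "Respond in {fmt}, and say 'unknown' if you are not sure."
)

def build_prompt(role: str, task: str, fmt: str) -> str:
    """Fill the shared template with the caller's specifics."""
    return TEMPLATE.format(role=role, task=task, fmt=fmt)

print(build_prompt(
    "security analyst",
    "Summarize the risks of this password-manager rollout.",
    "three bullet points",
))
```

Centralizing templates like this also gives auditors a single place to review what the organization is actually asking the tools to do.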

Hiring generative AI experts will only get harder as demand for them grows. You will need to look to recruiters and job boards, attend AI-focused conferences, and build relationships with local colleges and universities. You might decide it's in your company's best interest to create your own LLMs, fine-tune ones already available from vendors, and/or host LLMs in-house to avoid security problems. All those options will require more technical experts as well as additional infrastructure, according to the BCG report.

Geetanjli Dhanjal, senior director of consultancy Yantra, is expanding her firm's AI practice. She's focusing on cross-skilling existing employees, hiring external resources, and putting recent college grads through "enablement" programs that include data science, web-based training, and workshops. She's building out centers of excellence in both India and California and says that makes it "easier to hire local talent" in both regions.

And remember to talk to your employees about how their careers may change as a result. Even now, AI can conjure up fears about specific jobs going away. One analogy McKinsey's Chui uses is to spreadsheets. "We still use them, but now we have analysts who are modeling data instead of calculating," he says. Programmers using generative AI, for instance, can concentrate on improving code quality and ensuring security compliance.

When AI creates first drafts, humans are still needed to check and refine content, and to seek out new types of customer-facing strategies. "Track employee sentiment," the BCG report advises. "Create a strategic workforce plan and adapt it as the technology evolves."

It's a two-way street, Dhanjal says. "We have to support employees with training, resources, and the right environment to grow." But individuals also need to be open to change and to cross-skilling in new areas.

Be careful out there

As important as it is to jump in, it's also critical to maintain some perspective about the risks of today's tools. Generative AI is prone to a phenomenon known as "hallucinations," where, in the absence of enough relevant data, the tool simply makes up information. Sometimes this can yield amusing results, but it's not always obvious when it happens, and your corporate lawyers may not find it so funny.

Indeed, generative AI "can be wrong more than it's right," says Alex Zhavoronkov, CEO of Insilico Medicine, a pharmaceutical and AI firm that has based its business model around generative AI. But unlike most companies, Insilico uses 42 different engines to test the accuracy of each model. In the broader world, "you can sacrifice accuracy for snappiness" with some of today's consumer-oriented generative AI tools, he says.

In February, Insilico received Phase 1 approval from the US Food and Drug Administration for an AI-generated molecule used as the basis of a medication to treat a rare lung disease. The company cleared that first phase in under 30 months and spent around $3 million, versus traditional costs of around 10 times that amount, Zhavoronkov says. The economic benefits of using generative AI mean the company can target other rare illnesses, also called "orphan" diseases, where most pharma companies have been reluctant to invest, as well as conditions endured by broader segments of society.

The company uses its own highly technical tools, in the hands of chemists and biophysicists and other experts. But interestingly, "we're still cautious" about using generative AI for text generation because of inaccuracy and intellectual property issues, Zhavoronkov explains. "I want to see Microsoft and Google introduce this into their software suites before I start relying on it more broadly," he says.

Vendors and researchers are working on ways to identify and bar copyrighted content from AI results, or at least alert users about the sources of the results, but it's very early going. And that's why, at least until the tools improve, humans still very much need to be in the loop as auditors.

Get your guidelines on

In this world, ethical AI is more important than ever, says Abhishek Gupta, founder and principal researcher at the Montreal AI Ethics Institute. He also serves on Microsoft's CSE AI Ethics Review Board and as an ethical AI expert for the Boston Consulting Group.

"Responsible AI is an accelerant to give you the ability to experiment safely and with confidence," he explains. "It means you're not constantly looking over your shoulder," and it's well worth the time to develop controls about what employees may and may not do.

"Set some broad guardrails," he suggests, based around corporate values and goals. Then "capture those into enforceable policies" that you communicate to staff.
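Guardrails that live only in a policy document are easy to forget; the ones that stick are also checked automatically. As a minimal sketch, with entirely hypothetical tool names and rules, a pre-flight check on outgoing AI requests might look like this:

```python
# Hypothetical policy check turning written guardrails into an
# enforceable pre-flight gate. Tool names and banned terms are
# illustrative only, not a recommended list.
APPROVED_TOOLS = {"chatgpt", "internal-llm"}
BANNED_TERMS = ["confidential", "customer ssn", "source code dump"]

def check_request(tool: str, prompt: str) -> list[str]:
    """Return a list of policy violations; an empty list means
    the request is allowed to proceed."""
    violations = []
    if tool.lower() not in APPROVED_TOOLS:
        violations.append(f"tool '{tool}' is not on the approved list")
    lowered = prompt.lower()
    for term in BANNED_TERMS:
        if term in lowered:
            violations.append(f"prompt contains banned term '{term}'")
    return violations

print(check_request("chatgpt", "Summarize this confidential memo"))
```

A check like this would typically sit in a gateway or browser extension, logging violations so the bigger picture of company-wide usage stays visible to IT.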

Going forward, creating guidelines for AI will be on his agenda, CRC's Vielhaber says. The company is in the process of rewriting its IT- and security-related policies anyway, and AI will be a piece of that.

"I think we've crossed a threshold in AI that will open up a lot of things in the next few years," he says, "and people will come up with really ingenious ways to use it."