Exploring the Promise and Perils of Generative AI in the Enterprise: AI Series
We explore the potential use cases for generative AI as well as the risks associated with bringing this technology within an enterprise’s business model.
As part of our Artificial Intelligence Series, we held a panel discussion, Exploring the Promise and Perils of Generative AI in the Enterprise, providing insight into the opportunities and risks associated with the use of generative AI.
Our panel consisted of leading experts in the AI space: Susannah Wilkinson, Digital Law Lead APAC at Herbert Smith Freehills; Scott Thomson, Head of Innovation for Customer Engineering at Google; and Bronwyn Ross, AI Governance Product Lead at Red Marble AI. The discussion was hosted by TMT Melbourne Partner Julian Lincoln.
During the event, we explored the potential use cases for generative AI as well as the risks associated with bringing this technology within an enterprise’s business model.
- The Use of Generative AI
Broadly, generative AI can increase productivity and improve the accuracy of decision-making at work through efficient content creation, but with this new capability comes a vast array of new challenges. For example, the data sets used to train large language models may give rise to a range of concerns regarding intellectual property rights, privacy and data protection, as well as bias and discrimination. The output generated by AI raises a further set of issues.
- Ethical Considerations and Responsible AI
The panellists noted that it is important for enterprises to adopt an ethical and responsible AI framework governing the use of AI, which must involve a measured risk assessment for each use case. This might include additional transparency to consumers about the way generative AI is used, and a level of explainability so that decisions made by the AI system can be checked for accuracy. The panel noted that it is essential for a person affected by a decision made by AI to be able to have that outcome reviewed, because these models, like human brains, require checks and balances. For this reason, enterprises must articulate rules around the use of AI within their business models and embed these rules in their processes, because unfettered use of AI exposes the business to significant risk.
- Mitigation of Risks
We also touched on how enterprises can mitigate the risks of generative AI while incorporating large language models into their business. Prompt tuning was noted as an effective way to hone a large language model for a specific use case, but this requires a high level of machine learning governance and safety. For those employees worried about AI replacing human jobs, the panel noted that AI is not the solution to enterprise or industry problems but rather a vehicle to help achieve a solution. Overall, enterprises must ensure they are using generative AI tools in a safe and responsible way.
Watch
The panel discussion runs for approximately one hour, followed by a 30-minute Q&A.
Need CPD Points?
Access our on-demand platform to watch this video and gain a CPD point in Practice Management.
Legal Notice
The contents of this publication are for reference purposes only and may not be current as at the date of accessing this publication. They do not constitute legal advice and should not be relied upon as such. Specific legal advice about your specific circumstances should always be sought separately before taking any action based on this publication.
© Herbert Smith Freehills 2023