Stephen Wu to Speak at the Global Artificial Intelligence Conference

On January 19, 2017, SVLG attorney Stephen Wu will present a program at the Global Artificial Intelligence Conference entitled “Product Liability Issues in AI Systems.”  The talk will focus on the product liability risks facing companies that provide AI-based products and services.  It will cover the sources of legal risk to manufacturers and how manufacturers can manage those risks.  Many in the industry consider liability to be a chief obstacle to the widespread deployment of AI systems.  Nonetheless, it is possible to implement design practices and procedures that minimize and manage legal risk.

In preparation for the conference, Steve Wu answered several questions posed by the conference organizers.  A selection of those questions and his answers appears below.

Q.  Where are we today in terms of the state of artificial intelligence, and where do you think we’ll go over the next five years?

A.  I believe that state legislators will begin to grapple with drones and autonomous vehicles first. This legislation will, in the short run, provide point solutions to specific legal issues. The law of product liability regarding AI and robotic systems will develop through case law. Eventually, we will have more comprehensive legislation and efforts to compile and harmonize the law of AI and robotics.

Q.  There is a negative perception around AI, and even some leading technology figures have come out against it or said that it is potentially harmful to society. Where do you come down in those discussions? How do you explain AI in a way that emphasizes its more positive, beneficial impact on society?

A.  I agree that a failure by AI companies to develop safe products and services would endanger the public. Nonetheless, I believe that, in the short run, the risk from AI is not malicious conduct by machines but rather careless design. Moreover, I do not believe that AI is inherently harmful.

As I will explain in the talk, AI governance is becoming a key issue in this decade. It will become an even greater issue in the next decade. In the talk, I will go over ways that companies developing, offering, and deploying AI can manage the risk of harm to purchasers and the public. I believe it is possible to manage the risks of AI in order to accrue the tremendous benefits it is likely to bring to society.

Q.  What are some of the key takeaways that attendees can expect from your “Product Liability Issues in AI Systems” talk?

A.  I will discuss specific techniques that AI companies can employ to manage product liability risks. Managing risk begins with understanding its sources: the legal system, possible claims for defective products, and the dynamics of product liability cases. Most importantly, manufacturers and enterprise customers can and should integrate AI risk management into their practices so that they provide and deploy safe products. They should resist the temptation to rush products into production without considering the real risks to their own businesses and the public. In the future, accidents caused by defective AI may end up harming or killing people, ruining the reputations of the companies responsible, and causing personal consequences for the individuals involved, such as the loss of a job or personal liability.

Q.  Any closing remarks?

A.  People see legal issues involved with the sale and deployment of AI as one of the key obstacles to the commercialization of AI and robotics products. To put it simply, manufacturers do not want to go out of business because of ruinous liability suits. During the conference, I hope to shed light on the risks and provide ideas to companies to help them manage those risks. If we can manage the risks of AI, we will be able to keep the public safe while allowing the technology to meet its fullest potential.

For more information on the conference or this post, please contact Stephen Wu by completing the web form here.  A discount for conference attendance is available.  The registration page for the conference is here.