Published on:

People sometimes file lawsuits without the assistance of an attorney. People who represent themselves in a court case are called “pro se litigants.” Complaints filed by pro se litigants are common, including in California, and in Santa Clara and San Mateo Counties in particular. Imagine that your business received a complaint from a pro se litigant. Should you ignore it because it did not come from an attorney? You might think that a pro se case is not serious, but taking a complaint lightly could be a big mistake.

Judge Richard Posner served as a Circuit Judge of the United States Court of Appeals for the Seventh Circuit from 1981 until his recent retirement on September 1. Posner’s self-published book, Reforming the Federal Judiciary: My Former Court Needs to Overhaul Its Staff Attorney Program and Begin Televising Its Oral Arguments, was also released earlier this month.

In an interview with the New York Times, Posner opined that most judges regard pro se litigants as “not worth the time of a federal judge” and that judges in the 7th Circuit (in the Midwest) generally rubber-stamp the recommendations of staff lawyers who review pro se appeals. In addition, Posner said he was prevented from reviewing the staff attorney memos before they were circulated to judges, a step he believed would have given pro se litigants a better chance.


SVLG Shareholder Stephen Wu will host a conference call program on the recent Equifax data breach on October 25, 2017 at 10 am Pacific/1 pm Eastern. While the Equifax breach is not the largest ever in terms of the total number of records affected, by some estimates it affected about half of the population of the United States. With a breach that large, legislators and regulators are considering what new policies may help to prevent future large-scale breaches.

For businesses that create, receive, maintain, and transmit personal data, the Equifax breach raises the question of what changes are necessary to keep up with evolving data security threats. According to news reports, the breach occurred because of a failure in patch management — a failure to implement a publicly available patch for a known security vulnerability for a period of months. Are there emerging threats that warrant changes in patch management practices? Or did the Equifax breach occur because of the company’s failure to take care of basic patch management steps? We will explore these questions in this program.

The program will explore the technical and legal ramifications of the breach. What are the prospects for liability? What compliance challenges does the breach highlight? Are there changes in documented practices and procedures that the breach would suggest?


If your business provides services to healthcare providers or health insurance companies, your business may have data privacy and security requirements under a federal law called “HIPAA” (the Health Insurance Portability and Accountability Act). If your business offers an online service or application, the first time you may have heard of HIPAA is when your potential customer asks you to sign a “business associate agreement.” Even if you don’t sign a business associate agreement, you may have compliance obligations under HIPAA. And if you fail to comply with HIPAA, you may face penalties and liabilities for violations.

Health records are among the most sensitive sets of information about us. The results of an unauthorized disclosure of health records could be devastating. Leakage of health records could lead to victims’ embarrassment, stigma, job loss, and even identity theft. Following concerns about the privacy and security of health records in the 1990s, the public began to demand protections to ensure that the healthcare industry would implement controls over what information was gathered from patients, how that information could be shared, and how it would be securely managed. When Congress overhauled the healthcare laws and called for greater use of electronic transactions, Congress was aware of the need for protections over the privacy and security of health information.

The need for simplifying the administration of healthcare, coupled with a public concern over privacy and security, prompted Congress to include requirements for privacy and security in landmark healthcare legislation enacted in 1996. The 1996 legislation, called the Health Insurance Portability and Accountability Act of 1996 (“HIPAA”),[1] has had a broad impact on the healthcare industry since its enactment, transforming practices for creating, storing, managing, transmitting, and disclosing health information in the United States. Later, Congress passed the Health Information Technology for Economic and Clinical Health Act, also called the “HITECH Act.”[2]


On September 7, 2017, Equifax Inc. announced a security breach involving the compromise of sensitive personal information of approximately 143 million U.S. consumers. Attackers compromised Social Security numbers, birth dates, addresses, driver’s license numbers, and credit card numbers. This is the kind of information that could help attackers engage in identity theft. This isn’t the largest breach ever in terms of the number of unique accounts; the Yahoo breach involved approximately 1.5 billion accounts. Nonetheless, because the number of affected individuals approaches half of the U.S. population and the compromised information could be used for identity theft, the Equifax breach is far more concerning than the Yahoo breach.

What caused the breach? The Apache Software Foundation reported on September 9 that attackers compromised Equifax’s systems by exploiting a vulnerability in the Apache Struts Web Framework. It appears that Equifax failed to implement an update that would have prevented the attack. Thus, WIRED magazine reported that the Equifax breach was entirely preventable.

The steady stream of news about data breaches emphasizes the importance of rigorous enterprise security programs. The consequences for the breached company are enormous. Companies sued for data breaches are paying staggering amounts to investigate and settle the cases against them. For instance, The TJX Companies set aside $107 million to cover litigation and regulatory actions against it. Heartland Payment Systems set aside $73.3 million for breach expenses in 2009. The loss of sales, reputation, profit, and ultimately shareholder value may bring a company to its knees. At the time of the Sony breach, for instance, the company’s entire information technology infrastructure was down in order to mitigate the effect of the attack against it. Workers were using personal devices to continue conducting company business.


On January 19, 2017, SVLG attorney Stephen Wu will present a program at the Global Artificial Intelligence Conference entitled “Product Liability Issues in AI Systems.”  The talk will focus on product liability risks to companies providing AI-based products and services.  It will cover the sources of legal risk to manufacturers, and how manufacturers can manage those risks.  Many in the industry consider liability to be a chief obstacle to the widespread deployment of AI systems.  Nonetheless, it is possible to implement design practices and procedures to minimize and manage legal risk.

In preparation for the conference, Steve Wu addressed some questions posed by the conference organizers.  Some of the conference’s questions and Steve Wu’s answers are below.

Q.  Where are we now today in terms of the state of artificial intelligence, and where do you think we’ll go over the next five years?


Healthtech devices are increasingly common. People are wearing sensor devices that monitor fitness metrics. They can count steps and distance walked or run, calories burned, elevation changes, and heart rate. In the future, people may swallow sensor devices that can monitor or transmit video of the digestive system, may have sensor devices in their bloodstream monitoring the level of a medication, or may ingest smart pills that detect diseases. Devices can also be embedded in a patient, such as a catheter for an insulin pump, a pacemaker, or a microchip placed under the skin.

With all of these devices, various security vulnerabilities may be present. Hackers can exploit vulnerabilities to take control of them or otherwise tamper with them. Devices that communicate with systems outside the body entail the risk of interception or interruption. Moreover, once systems collect data from devices on or in the body, the systems are potential targets for attack.

To mitigate these risks, device manufacturers should design security into their products. They should thoroughly test each device during the design phase to determine whether vulnerabilities pose risks to users. They should have an independent third party test the device for vulnerabilities and should seek any available security certifications for the device. Finally, the vendor hosting the applications and data needs to secure both the data and the systems collecting it. It should use transmission security procedures and technology to secure communications with the device, encrypt the collected data, and manage access to the infrastructure supporting the devices.


This morning, on September 20, 2016, the U.S. Department of Transportation issued long-awaited guidance on automated vehicles (AV). DOT took a flexible approach of “guidance” and does not intend the document to be the last word on autonomous driving. Rather, it seeks to create a framework and process that will inform future DOT action. It’s interesting that DOT raises the possibility of pre-market approval, and an appendix uses FAA authority as an analog. Finally, DOT expressly raises the ethical issues involved in AV programming, although it does not seek to take a definitive position on them.

The DOT guidance document is linked here:

https://www.transportation.gov/sites/dot.gov/files/docs/AV%20policy%20guidance%20PDF.pdf


Silicon Valley Law Group is pleased to announce the publication of Attorney Stephen S. Wu’s new book: “A Guide to HIPAA Security and the Law – Second Edition.” The American Bar Association published his book last month. The book provides detailed information about healthcare information technology security legal requirements and how covered entities and business associates can comply with them.

Also, please join us for a special Meetup presentation, in which Steve Wu will share his thoughts on an important topic covered in one of his book’s chapters: the impact of emerging technologies on HIPAA security compliance. The program is on September 28, 2016 at 10:00 a.m. Pacific Time at SVLG’s offices. A dial-in is available for those unable to attend in person.

The Department of Health and Human Services issued the HIPAA Security Rule in 2003 to impose information technology security requirements on HIPAA covered entities: healthcare providers, health plans, and healthcare clearinghouses. Later legislation and regulation also imposed HIPAA security requirements on various “business associates” of these covered entities. Despite some changes in coverage and the breach notification rule, the core HIPAA security requirements have remained unchanged since 2003. Nonetheless, technology trends such as cloud computing, social media, and mobile computing have required applying the existing rules to new technologies. Moreover, we are now facing dramatic and sweeping changes with augmented and virtual reality systems, Big Data, 3D printing, healthtech, the Internet of Things, robots, and artificial intelligence systems.

by David Duperrault

On December 12, 2015, delegations from 195 nations reached consensus on a historic climate accord. The Paris Agreement was the culmination of a process that began in Rio de Janeiro in 1992, when the United Nations Framework Convention on Climate Change (UNFCCC) was created.  The three central objectives of the Paris Agreement are:

(a) Holding the increase in the global average temperature to well below 2°C (3.6°F) above pre-industrial levels and pursuing efforts to limit the temperature increase to 1.5°C (2.7°F) above pre-industrial levels, recognizing that this would significantly reduce the risks and impacts of climate change;