Cyber security affects all businesses and industries and is a Board level agenda item.
Our quarterly article provides a roundup of best practice, news and legislative developments concerning cyber security in Europe, Asia, Australia and the USA.
On 1 December 2017, the High Court handed down its judgment on the UK's first class action arising from a data breach (Various Claimants v Morrisons). The High Court allowed the claim and deemed Morrisons to be vicariously liable for the criminal actions of a former employee.
In July 2015, Andrew Skelton (a former Morrisons employee) was sentenced to eight years in jail after being found guilty of stealing the names, addresses, bank, salary and national insurance details of almost 100,000 of his former colleagues and unlawfully sharing them with news outlets and data sharing websites. Morrisons then reportedly spent more than £2 million on measures to tackle the breach.
Almost 6,000 of those affected recently brought a class action, despite not having suffered any financial loss, on the basis that Morrisons was liable, directly or vicariously, for:
(i) the criminal action of its rogue employee in disclosing personal information of co-employees; and
(ii) the subsequent distress suffered by those employees;
whether in breach of certain data protection principles under the Data Protection Act 1998 ("DPA"), an action for breach of confidence, or an action for misuse of private information (a tort established in Google v Vidal Hall, discussed further below).
The judgment cleared Morrisons of direct liability as it had not breached any of the data protection principles (except in one respect, which was not causative of any loss), nor could direct liability be established for misuse of private information or breach of confidence. This is because once Mr Skelton acted autonomously in deciding how to handle the personal data, he became the data controller in respect of the relevant processing. The acts that breached the DPA were therefore those of a third party data controller (Mr Skelton), not Morrisons. However, it was held that the DPA does not exclude vicarious liability, despite not expressly referring to it. As Mr Skelton's disclosure of the data was deemed part of a seamless and continuing series of events, it was held that he acted in the course of his employment and that Morrisons was therefore vicariously liable for his actions. The judgment also stated that this conclusion would be the same regardless of whether the basis of Mr Skelton's liability was seen as a breach of duty under the DPA, a misuse of private information or a breach of confidence.
Google v Vidal-Hall
The recent judgment follows the landmark case of Google v Vidal-Hall in March 2015, which established the right to damages for emotional distress for breach of the DPA, including in the absence of any financial loss or other material damage. The principle of damages for emotional distress was established on the basis that section 13(2) of the DPA (which essentially required a claimant to establish actual financial loss before being able to claim compensation for data protection breaches) was incompatible with Article 23 of the EU Data Protection Directive. It should therefore be disapplied in accordance with the 'Marleasing' principle (that national legislation is to be interpreted "as far as possible" in light of the wording and purpose of the directive, to achieve the result the directive seeks). It was also disapplied on the grounds that it conflicts with the rights guaranteed by the EU Charter of Fundamental Rights. Google v Vidal-Hall also recognised the misuse of private information as a tort. Prior to the case, the courts had used the law of confidentiality to afford appropriate protection to privacy rights under Article 8 of the European Convention on Human Rights. Recognising the misuse of private information as a tort therefore did not create a new cause of action, but gave the correct label to an existing cause of action.
Implications for organisations
The Morrisons judgment establishes vicarious liability for data breach, in addition to direct liability, which could have significant implications for organisations. Not only are organisations liable for the distress caused by a data breach, even in the absence of financial loss, but they are now also potentially liable for the way that their employees access and handle data.
With large-scale data breaches an almost weekly occurrence, it is easy to imagine such breaches resulting in more compensation claims being brought by large numbers of affected individuals, even where they have not suffered financial loss. Whilst individuals may not themselves be entitled to significant sums, if a data breach affected tens or hundreds of thousands of individuals, the total potential compensation liability for organisations could become substantial.
With the GDPR applying from May 2018, the maximum fines that can be levied by regulators are increasing very significantly (in the UK, from the £500,000 maximum fine the ICO can presently levy, up to 4% of global turnover or €20 million, whichever is greater, for certain breaches). It therefore remains to be seen whether damages awarded to data subjects also increase, but the additional weight placed by regulators on data protection is likely to raise the profile of such claims. Also, given that the requirements of the GDPR are stricter in some places than under the DPA, the risk of non-compliance is greater. That is without taking into account the reputational damage such incidents can also bring.
In giving judgment, Mr Justice Langstaff expressed concern that Mr Skelton's wrongful acts were deliberately aimed at Morrisons, such that by finding Morrisons vicariously liable the Court could be regarded as "an accessory to furthering his criminal aims". As a result, he granted Morrisons leave to appeal the conclusion on vicarious liability, but would not, without further persuasion, grant permission to cross-appeal his conclusions as to direct liability. Morrisons has since confirmed its intention to appeal the decision, so it remains to be seen whether this judgment will stand.
The full 'Various Claimants v WM Morrisons Supermarket PLC' judgment can be found here.
Formal DCMS response awaited by the end of the year on consultation to implement the Cyber Security Directive in the UK
The public consultation issued by the UK Department for Digital, Culture, Media & Sport on implementing the EU Network and Information Security Directive (“Cyber Security Directive”) into national legislation closed on 30 September 2017 (the “Consultation”).
The Consultation sets out the UK Government’s planned approach for implementing the Cyber Security Directive, along with a series of questions on a range of detailed policy issues relating to the implementation. It seeks to obtain views from industry, regulators and other interested parties on the proposed plans. The Government is currently analysing feedback and a formal response is expected in December 2017 (within ten weeks of the consultation closing date). The Government has also confirmed its intention for the implementing legislation to continue to apply in the UK post-Brexit (refer to our previous related article for further detail).
The so-called Cyber Security Directive was adopted by the European Parliament on 6 July 2016. Member States have until 9 May 2018 to transpose the directive into domestic legislation and it will apply from 10 May 2018. The Cyber Security Directive intends to provide legal measures to boost the overall level of cyber security in the EU, by:
ensuring that Member States have in place a national framework to support and promote the security of network and information systems, consisting of a National Cyber Security Strategy, a Computer Security Incident Response Team (“CSIRT”), a Single Point of Contact (“SPOC”) and a national competent authority (or authorities) in respect of network information security;
setting up a co-operation group to support and facilitate strategic cooperation and the exchange of information among Member States; and
ensuring the framework for security of network and information systems is applied effectively across sectors which are vital for the economy and society and those that rely heavily on information networks, including energy, transport, water, healthcare and digital infrastructure sectors.
Businesses in these sectors that are identified by Member States as “operators of essential services” will have to take appropriate and proportionate security measures to manage risks to their network and information systems and notify serious incidents to the relevant authority.
Key “digital service providers” (e.g. search engines, cloud computing services and online marketplaces) will also have to comply with security and incident notification requirements established under the Cyber Security Directive.
Some of the key elements proposed by the Government in the Consultation include:
Sanctions regime: An approach similar to that of the General Data Protection Regulation (the “GDPR”), to provide consistency with the Government’s overall regulatory approach towards cyber security. Member States are required to lay down rules on penalties for breaches of the national provisions; these must be effective, proportionate and dissuasive. The two-tier bands proposed in the Consultation comprise:
Tier 1: a maximum of €10 million or 2% of global turnover (whichever is greater) for lesser offences (such as failure to cooperate with the competent authority and failure to report a reportable incident); and
Tier 2: a maximum of €20 million or 4% of global turnover (whichever is greater) for failure to implement appropriate and proportionate security measures.
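The two tiers above follow the GDPR pattern of capping fines at the greater of a fixed amount or a percentage of global annual turnover. A minimal sketch of that calculation (the function name and turnover figures are illustrative only):

```python
def max_fine(turnover_eur: float, fixed_cap_eur: float, pct: float) -> float:
    """Return the proposed statutory cap: the greater of the fixed
    amount or the given percentage of global annual turnover."""
    return max(fixed_cap_eur, turnover_eur * pct)

# Tier 1 (lesser offences): greater of EUR 10m or 2% of turnover
tier1 = max_fine(2_000_000_000, 10_000_000, 0.02)  # 2% of EUR 2bn = EUR 40m

# Tier 2 (security failures): greater of EUR 20m or 4% of turnover
tier2 = max_fine(2_000_000_000, 20_000_000, 0.04)  # 4% of EUR 2bn = EUR 80m
```

For a smaller operator, the fixed amount dominates: 4% of €100 million is €4 million, so the Tier 2 cap would remain €20 million.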
The Information Commissioner (the “ICO”) published her response to the Consultation on 29 September 2017. In her response, she concurred with the Government’s intention to align the penalty regime with the GDPR but advised that further clarity was required on this alignment. The ICO also advised the Government to take into account the guidelines on administrative fines that were published by the Article 29 Working Party in October 2017.
The Government provides some comfort that financial penalties will only be levied as a last resort, where it is assessed that appropriate risk mitigation measures were not in place without good reason, and acknowledges that the maximum fines would only be appropriate in the most “egregious incidents”. However, interested parties have commented that the proposed regime seems disproportionate compared to other regimes elsewhere in Europe. In Germany, for example, the IT Security Act provides for a maximum fine of €50,000 for breach of security and reporting obligations and a maximum fine of €100,000 for non-compliance with a direct order from the German regulator.
Operators of essential services (“OES”): A proposed approach to identify these operators using four criteria, which are set out in an annexure, namely: the sector (the broad part of the UK economy); the subsector (specific elements within an individual sector); the essential service (the specific type of service); and identification thresholds (e.g. based on size or the impact of the events sought to be prevented).
The proposed thresholds are stated to be “at such a level so as to capture only the most important operators in each sector based on potential of a disruption to their essential service resulting in what the government considers would be a significant disruptive effect” with separate thresholds to be established for incident reporting. The Government has attempted to make these criteria as clear as possible to allow operators to determine whether they need to comply with the directive.
The Consultation also acknowledges that the banking and financial market infrastructure sectors within scope of the Cyber Security Directive will be exempt from certain aspects of the legislation where provisions at least equivalent to those in the directive will already exist by the time the directive comes into force. The identification process for operators of essential services is one such example. Firms and financial market infrastructure within these sectors must continue to adhere to requirements and standards set by the Bank of England and/or the Financial Conduct Authority.
Service providers not caught by the thresholds in the annexure may still be subject to the proposed security measures. The Consultation also proposes a reserve power for the Government (or relevant competent authority) to designate specific operators in the implementing regulations, even though they are outside of the thresholds. This limited power is envisaged to apply where there are valid reasons on the grounds of national security, a potential threat to public safety or the possibility of significant adverse social or economic impact resulting from a disruptive incident.
Digital Service Providers (“DSPs”): Proposed definitions for each of online marketplace, online search engine and cloud computing services.
Competent Authority: A proposal to nominate multiple sector-based competent authorities to be responsible for implementing the Cyber Security Directive (rather than a single national competent authority). Whilst a balance is necessary between expertise in the security of network and information systems (which a single authority may more easily develop), the Consultation acknowledges that this needs to run alongside ensuring the nominated authority has a detailed understanding of the individual sectors and their associated challenges, something which multiple competent authorities may more easily facilitate. The Consultation sets out a table of proposed competent authorities divided by sector, with the ICO proposed as the competent authority for DSPs, for example.
Where operators provide services in more than one sector and therefore fall under the remit of more than one competent authority, the Consultation confirms that the relevant authorities will be encouraged to cooperate and provide consistent advice and oversight. The same approach is encouraged where an incident crosses regulatory boundaries (e.g. a NIS incident that also involves the loss of personal data). In these circumstances, the ICO’s response clarifies that any requirement to notify the National Cyber Security Centre (“NCSC”) about a breach under the Cyber Security Directive will not satisfy the requirement to inform the ICO of data breaches where required under the GDPR, which will need to be reported separately.
Security requirements for operators of essential services: A guidance and principle based approach to implement the security requirements set out in the Cyber Security Directive. The Government’s proposed high level security principles are set out in an annexure to the Consultation and include a principle specifically to address supply chain protection i.e. so that an organisation understands and manages security risks to the network and information systems supporting the delivery of essential services arising from dependencies on external suppliers. This includes ensuring that appropriate measures are employed where third party services are used. The principles will be complemented by more detailed guidance (including sector specific guidance). The Government proposes a similar principles and guidance based approach to security measures for DSPs with the aim of ensuring the guidance is as close to the ENISA guidance as possible (refer to our related article here for further detail on the ENISA guidance).
Incident reporting for operators of essential services: A proposal for how to define an incident for the purpose of the Cyber Security Directive, thresholds for determining whether an incident has a significant impact and the timeframe within which an incident must be reported. The Government states its aim to align reporting requirements under the Cyber Security Directive with existing arrangements where possible. A similar strategy is set out in respect of DSPs. All reporting is proposed to be to the NCSC, as the dedicated CSIRT for the purpose of the directive.
The Government considers knowledge of threats and incidents to be an important part of understanding risks and mitigating possible threats. It has therefore also proposed voluntary reporting of incidents that do not meet the specified thresholds - such as where operators have to take action to maintain supply, provision, confidentiality or integrity of the service. Whilst this voluntary reporting will not subject the OES to increased liability, the competent authority will expect an OES to respond to these incidents as part of their duty to ensure that appropriate risk-management measures are in place to mitigate the impact of any adverse incident.
Whilst the Cyber Security Directive simply states the need to notify an incident “without undue delay”, it is common to set a maximum period in which companies have to report. The Government seeks to align this with the requirements of the GDPR by suggesting “without undue delay and as soon as possible, at a maximum no later than 72 hours after having become aware of an incident”. However, given the slight differences in the drafting used when compared to the corresponding GDPR provision, the ICO has commented that a direct transposition of the equivalent provision would more readily achieve such alignment – not least inclusion of the words “where feasible” with reference to the time period. Where existing arrangements for incident reporting relating to loss of supply of critical / essential service exist and are of a shorter time frame, these will remain in place.
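The proposed “no later than 72 hours after having become aware” backstop reduces to a simple deadline calculation, although, as discussed elsewhere in this article, the point of “awareness” is itself a matter of legal judgement. A sketch, with illustrative timestamps:

```python
from datetime import datetime, timedelta, timezone

# Proposed maximum reporting window under the Consultation
REPORTING_WINDOW = timedelta(hours=72)

def reporting_deadline(became_aware: datetime) -> datetime:
    """Latest time to notify the competent authority under the proposed
    'no later than 72 hours after becoming aware' backstop."""
    return became_aware + REPORTING_WINDOW

aware = datetime(2018, 5, 25, 9, 0, tzinfo=timezone.utc)
deadline = reporting_deadline(aware)  # 2018-05-28 09:00 UTC
```

Note that shorter existing sectoral timeframes for reporting loss of supply of a critical or essential service would remain in place, so in practice the operative deadline may be earlier than this backstop.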
In the meantime, on 13 September 2017 the European Commission also published a draft implementing regulation in respect of the Cyber Security Directive which will no doubt feed into the DCMS’s forthcoming formal response to the Consultation. Once finalised the EU implementing regulation (including some of the thresholds and other tests for what constitutes a “substantial” incident for Digital Service Providers) will have direct effect while the UK remains in the EU, but not afterwards. It remains to be seen whether any further national legislation such as the Great Repeal Bill will seek to mirror the requirements of the EU implementing regulation (including the test for whether there is an impact on users in the remaining Member States of the EU).
The DCMS Consultation document can be found here.
The draft European Commission Implementing Regulation can be found here.
Cyber security is becoming highly relevant in the context of arbitration proceedings, a key component of which is their confidentiality. Parties, arbitrators, counsel and institutions are all vulnerable to cyber-attacks, the consequences of which would undoubtedly be very serious.
The potential impact of a cyber security breach on arbitration proceedings has catalysed the creation of a new working group in November 2017 that will consider the impact of cyber security breaches on international arbitration and current practice and duties. The working group consists of members from the International Council for Commercial Arbitration (ICCA), the New York City Bar Association and the International Institute for Conflict Prevention & Resolution.
The working group will aim to create cyber security guidelines for counsel, arbitrators and institutions as well as protocols that are optional and can be adopted by parties in arbitration. The guidelines and protocols will then be consulted on at the 2018 ICCA Congress which will be held in Sydney.
For more information please see the ICCA press release here.
In the run up to the GDPR applying from next year, there has been a variety of practical guidance for compliance at the European level through the Article 29 Working Party (“WP29”) (which reflects the consolidated view of national supervisory data protection authorities in each member state) and at the national level through the UK Information Commissioner’s Office (“ICO”).
Most recently, in October 2017 the WP29 published guidelines on: (i) personal data breach notification requirements; (ii) automated individual decision-making and profiling; and (iii) the application and setting of administrative fines. The WP29 has also adopted guidelines on the right to data portability, data protection officers, lead supervisory authorities and data protection impact assessments.
The ICO has issued a range of guidelines to assist organisations with compliance as well, including a constantly evolving “Overview of GDPR” which is intended to form the ICO’s guide to the GDPR. More recently, the ICO has also issued guidance on: (i) contracts and liabilities between controllers and data processors; and (ii) consent, plus a discussion document on profiling. Subsequent guidance is expected in the run-up to the application of the GDPR.
The draft guidance on contracts and liabilities between controllers and processors sets out the ICO’s interpretation of the GDPR and its general recommended approach to compliance and good practice. This is of particular importance to UK organisations, given that written contracts between controllers and processors are now required under the GDPR rather than being, as they were formerly, the way of demonstrating compliance with the seventh principle of the Data Protection Act 1998 (regarding appropriate security measures). These contracts must include certain mandatory contractual provisions, as a minimum. The terms are designed to ensure processing meets the GDPR’s requirements (including beyond just keeping personal data secure).
The draft guidance clarifies a number of issues relating to mandatory contractual terms and gives practical advice regarding how they should be drafted (see below). The guidance makes it clear that contracts must contain specific details about the data processing being carried out, including subject matter, length and purpose of the processing, the categories of data subjects involved and the types of data being processed. Relevant contracts should be updated to remove generic terms used to describe this information, as the guidance makes it clear they will not be acceptable.
The draft guidance also confirms that contracts must require the processor to tell the controller immediately if it is asked to do something infringing the GDPR or other data protection law within the EU or a Member State. It was previously unclear from the drafting of the GDPR whether this term would be required.
For the first time, the GDPR imposes direct statutory responsibilities and liabilities on processors, outside the terms of the processor-controller contract. Processors, as well as controllers, may now be liable to pay damages or be subject to fines or other penalties. With the significant increase in the sanctions and penalties that can be imposed under the GDPR for non-compliance, the new requirements potentially give rise to a very different risk assessment and negotiating position for organisations.
The majority of the processor obligations are made clear in the GDPR and in the guidance. Processors will be subject to direct responsibilities:
to ensure the security of their processing;
not to use a sub-processor without the prior written authorisation of the controller;
to co-operate with supervisory authorities;
to keep records of processing activities;
to employ a data protection officer (if required); and
to appoint (in writing) a representative within the European Union if needed.
The guidance also emphasises that controllers still have direct liability to data subjects for damage suffered regardless of the use of a processor - unless they are “not in any way responsible for the event giving rise to the damage” under Article 82(3) – a high threshold.
Many organisations already have written agreements in place to comply with the existing data protection framework. A key component of any compliance programme ought to be a review of any of these arrangements that will still be in force on 25 May 2018. In doing so it may be necessary to prioritise immediate areas to be rectified based on proportionality and risk, documenting all decisions taken. In particular, organisations should identify key or high-risk contracts (it may be worth conducting a data privacy impact assessment to identify the latter, especially if large amounts of personal data are being transferred). They should then review the terms to determine, for example: (i) what additional provisions or information need to be included given the wider requirements of the GDPR; and (ii) where the organisation’s liability currently sits (bearing in mind the greater statutory exposure now for both controllers and processors). This will help determine the most appropriate steps to take, whether to renegotiate agreements, and the scale and scope of the exercise. Any such exercise is also a good opportunity to review the cyber security provisions and processes in these agreements more widely.
The GDPR allows for standard clauses from the EU Commission or a supervisory authority to be used to aid any redrafting required (none have been issued to date). The GDPR also envisages that adherence by a processor to an approved code of conduct or certification scheme may be used to help controllers demonstrate they have chosen a suitable processor. Standard contractual clauses may form part of such code or scheme, although no such schemes are currently available.
The ICO acknowledges the guidance will continue to evolve to take account of experience applying the GDPR and future guidelines issued by relevant European authorities.
ICO’s draft guidance on data controller and processor liability can be found here.
The GDPR introduces a new mandatory requirement for all controllers to notify the appropriate data protection authority of a “personal data breach” likely to result in a risk to people’s rights and freedoms, for example following a cyber-attack. This will include providing the regulator with a significant amount of information about the breach and marks a change from the present regime where notification to the ICO is not mandatory (although the ICO does already encourage notification for “serious breaches”).
The GDPR also includes a new obligation to notify the affected data subjects themselves: when a “personal data breach is likely to result in a high risk to the rights and freedoms of natural persons”. There is an exception in relation to those parts of the data which have been rendered unintelligible to unauthorised persons through the application of technical measures such as encryption or so-called “salting and hashing”.
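The “salting and hashing” mentioned above refers to storing a one-way digest of a value together with random per-record data (the salt), so that the stored form is unintelligible to anyone who obtains it. A minimal sketch using Python's standard library, shown here purely for illustration (PBKDF2 is one common construction; the iteration count is an assumption and real deployments should follow current technical guidance):

```python
import hashlib
import hmac
import os
from typing import Optional, Tuple

def hash_with_salt(secret: str, salt: Optional[bytes] = None) -> Tuple[bytes, bytes]:
    """Return (salt, digest) using PBKDF2-HMAC-SHA256.
    A fresh random salt is generated per record unless one is supplied."""
    salt = salt if salt is not None else os.urandom(16)
    digest = hashlib.pbkdf2_hmac("sha256", secret.encode(), salt, 100_000)
    return salt, digest

def verify(secret: str, salt: bytes, digest: bytes) -> bool:
    """Recompute the digest with the stored salt and compare in constant time."""
    return hmac.compare_digest(hash_with_salt(secret, salt)[1], digest)
```

Because each record gets its own random salt, identical underlying values produce different stored digests, which is part of what renders a stolen dataset unintelligible without the original inputs.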
Fines for breach of the separate fundamental requirements to implement appropriate technical and organisational security measures under Article 32(1) of the GDPR are set at the lower tier under the new sanctions regime. Article 33(5) also requires controllers to document all personal data breaches – comprising the facts of the breach, its effects and remedial actions taken – so as to enable regulators to verify compliance with the Article 32 requirements. This is in line with the accountability principle that runs through the provisions of the GDPR.
The Article 29 Working Party recently issued guidance which discusses the notification obligations and includes some worked examples of various types of breaches, including when notification is and isn’t required.
The obligation to notify without undue delay is triggered by awareness of a breach. The guidance clarifies that a controller can undertake a brief initial investigation to determine whether or not there is a breach and during this window it may be regarded as not yet being “aware”. Awareness of a processor, however, will also be deemed to be awareness of the controller (noting that the former has an obligation to notify the latter). The guidance accepts that “bundled” notifications may be appropriate for multiple similar breaches. Where a failure to notify the supervisory authority also reveals the lack of adequate security measures, there is the possibility of two sets of sanctions.
The threshold for notification of affected individuals is deliberately higher – partly to protect individuals from “notification fatigue”. Notifications should be in dedicated messages to make communication of the breach clear and transparent, rather than being tacked onto a normal communication. Multiple channels of communication may be preferable in certain circumstances to maximise the chance of properly communicating information to all affected individuals.
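The layered thresholds described above (document every breach internally; notify the supervisory authority where there is a risk to individuals; notify the individuals themselves only where that risk is high, unless the data was rendered unintelligible) can be sketched as decision logic. The class names, fields and wording below are hypothetical, purely to illustrate the structure of the obligations:

```python
from dataclasses import dataclass
from enum import Enum

class Risk(Enum):
    NONE = 0       # unlikely to result in a risk to rights and freedoms
    RISK = 1       # likely to result in a risk
    HIGH_RISK = 2  # likely to result in a high risk

@dataclass
class Breach:
    risk: Risk
    data_unintelligible: bool  # e.g. encrypted or salted-and-hashed

def required_actions(breach: Breach):
    # Documentation is always required, whatever the risk level
    steps = ["document internally (Article 33(5))"]
    if breach.risk in (Risk.RISK, Risk.HIGH_RISK):
        steps.append("notify supervisory authority (Article 33)")
    # Individuals are notified only at the higher threshold, and the
    # unintelligible-data exception can disapply that obligation
    if breach.risk is Risk.HIGH_RISK and not breach.data_unintelligible:
        steps.append("notify affected individuals (Article 34)")
    return steps
```

In practice the risk assessment itself is the hard part; the guidance's worked examples are directed at exactly that classification step.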
The Article 29 Working Party guidance on personal data breach reporting can be found here.
Cyber incidents have the capacity to cause many different types of loss. Insurance coverage exists for at least some aspects of cyber risks in the UK market. However, given the range and diversity of risks that may arise, there are some key issues for businesses to consider when it comes to insurance against cyber risks in commercial contracts. Our recent article considers these issues in more detail and can be found here.
This article was first published in the December 2017 issue of PLC Magazine.
Across the EU the cyber insurance market is growing rapidly, and the adoption of the GDPR and the Cyber Security Directive is likely to expand the market further as organisations falling within the scope of this legislation seek to protect themselves. In response to this rapid growth, the European Union Agency for Network and Information Security (“ENISA”) published a report on the commonality of risk assessment language in cyber insurance on 15 November 2017. The report proposes two sets of recommendations that aim to support the harmonisation of risk assessment language, which would facilitate the expansion of the EU cyber insurance market without stifling innovation.
This follows guidance at the national level from the Prudential Regulation Authority (“PRA”) earlier this year. The PRA issued a Supervisory Statement (the “Statement”) in July 2017 setting out its expectations of all UK non-life (re)insurance firms within the scope of Solvency II in relation to the management of their cyber insurance underwriting risk. The Statement was issued following a cross-industry consultation conducted between October 2015 and June 2016 and publication in November 2016 of the key findings of that consultation. The PRA also published a Policy Statement alongside the Statement.
The PRA says in the Statement that it “expects firms to be able to identify, quantify and manage cyber insurance underwriting risk”. Cyber underwriting risk may take one of two forms:
affirmative cyber risk, i.e. insurance policies that explicitly include coverage for cyber risk; and
non-affirmative (or ‘silent’) cyber risk, i.e. insurance policies that do not explicitly include or exclude coverage for cyber risk. The Statement says that this includes all property and casualty covers which could give rise to cyber risk exposure from physical and non-physical damage.
In order to identify, quantify and manage cyber insurance underwriting risk, firms are expected to:
introduce measures that reduce the unintended exposure to non-affirmative cyber risk with a view to aligning the residual risk with the risk appetite and strategy that has been agreed by the board. This includes making adequate capital provisions against this risk. The measures may also include adjusting the premium to reflect the additional risk and offer explicit cover; introducing policy exclusions; and/or attaching specific limits of cover;
have clear strategies on the management of cyber risks, which are owned by the board and reviewed at least annually. The strategy should include a clearly articulated risk appetite statement with both quantitative and qualitative elements, for example defining target industries to focus on, strategy for managing non-affirmative cyber risk, specifying rules for line sizes, aggregate limits for industries, splits between direct and reinsurance etc.; and
understand the continuously evolving cyber landscape and demonstrate a continued commitment to developing their knowledge of cyber insurance underwriting risk (including both affirmative and non-affirmative risk). This knowledge and understanding should be aligned to the level of risk and any growth targets the firm has in this field, and should cover all three lines of defence (business, risk management, and audit).
The PRA’s Statement will focus insurers on the need to get a better handle on their cyber exposure, particularly their ‘silent’ cyber exposure. There are currently numerous policies in the London market which include ‘silent’ cyber risk merely by virtue of the fact that cyber risks are not excluded from cover. For example, unless cyber risks are excluded, property and business interruption policies may respond to a cyber-attack which causes physical damage. Likewise, a liability policy might respond where the liability results from a cyber breach which is not excluded. It will be no small task for (re)insurers to identify, quantify and manage these ‘silent’ cyber risks. One option for (re)insurers to manage their risk is to include cyber exclusions in non-cyber insurance products. However, particularly in a soft market, it remains to be seen whether they will have the ability or appetite to do this.
We recently presented a related webinar titled "Cyber Insurance: understanding the insurance response" which can be accessed by clicking here.
The ENISA report on the commonality of risk assessment language in cyber insurance can be found here.
The PRA’s Supervisory Statement can be found here.
The PRA’s Policy Statement can be found here.
On 22 November 2017, the UK Government unveiled plans to invest £1bn in technology projects, including £400m for electric car charging points and £75m for research on artificial intelligence. This is one of a number of ways in which the Government is seeking to support this growing industry.
In addition, the Automated and Electric Vehicles Bill (the “Bill”) was announced in the Queen’s speech earlier this year. The Bill passed its second reading in the House of Commons on Monday 23 October, was considered in a Public Bill Committee and reported without amendment on Thursday 16 November 2017. The Bill will now be considered at Report Stage on a date that is still to be announced. The Bill aims to:
specify who is liable for damages following accidents caused by automated vehicles; and
improve the network of charging points for electric vehicles.
The Bill meets these aims by extending the application of insurance law from a (human) driver-centric model to one that will cover automated vehicles where the car is essentially the driver. The proposed powers in the Bill would also allow the Government to regulate to improve the consumer experience of electric vehicle charging infrastructure, to ensure provision at key strategic locations like Motorway Service Areas (MSAs), and to require that charge points have “smart” capability.
The Bill forms a key part of the regulatory regime required for rapidly evolving automated and electric vehicle technology, a further critical element of which is ensuring the cyber security, data security and integrity of automated and electric vehicles. The Government has started to consider the cyber security of automated vehicles through eight key principles that it published in August 2017. The principles are designed to encourage the industry to work together to enhance cyber security in this sector and place responsibility for system security at board level.
The principles are summarised below:
Principle 1: organisational security is owned, governed and promoted at board level
Principle 2: security risks are assessed and managed appropriately and proportionately, including those specific to the supply chain
Principle 3: organisations need product aftercare and incident response to ensure systems are secure over their lifetime
Principle 4: all organisations, including sub-contractors, suppliers and potential third parties, work together to enhance the security of the system
Principle 5: systems are designed using a defence-in-depth approach
Principle 6: the security of all software is managed throughout its lifetime
Principle 7: the storage and transmission of data is secure and can be controlled
Principle 8: the system is designed to be resilient to attacks and respond appropriately when its defences or sensors fail
The Government has previously stated its ambition to become a “leader” in autonomous technology. Its commitment to creating an adequate regulatory and legislative framework is a clear indication of the Government’s support for the further development and mass production of automated vehicle technologies.
The Automated and Electric Vehicles Bill can be found here.
The full key principles of vehicle cyber security for connected and automated vehicles can be found here.
We recently hosted a series of panel discussions with guest speakers with a range of expertise to discuss these issues. Our latest report, Connected and Autonomous Vehicles: Navigating the Future, relays some of the key questions, challenges and potential solutions that were discussed during these sessions and that are expected to arise as this technology is developed and commercialised. The report is available here.
Please also see our article below in relation to autonomous vehicles legislation in the US.
The National Audit Office (“NAO”) has published a report (the “Report”) which investigates the National Health Service’s (“NHS”) response to the global ransomware cyber-attack known as WannaCry and the impact of the attack on the health services.
In May 2017 the WannaCry attack significantly disrupted critical infrastructure systems across the world, including the systems underlying the NHS. The incident caused significant disruption to the health sector in the UK and has led to the Department of Health (“DoH”) and NHS issuing data security and protection requirements that will need to be implemented by all health care organisations before April 2018 to mitigate the risks associated with a subsequent attack. The Report sets out facts regarding the impact of the attack on the NHS and its patients, reasons why certain parts of the NHS were affected and how the DoH and the NHS national bodies responded to the attack.
Some of the key findings of the Report are listed below:
WannaCry is the largest cyber-attack so far to have affected the NHS. While the full extent of the disruption is not known, 34% of trusts in England were affected and thousands of appointments and operations were cancelled.
The DoH had been warned about the NHS’ vulnerability to cyber-attacks in July 2016. Although the DoH had work under way to address these risks, the measures were not formally implemented until July 2017, and the DoH was not aware of local NHS organisations’ level of preparedness to deal with such an attack.
While no NHS organisation paid a ransom in response to the ransomware demands, the cost of disruption to services suffered by the NHS is not known.
Although the DoH had a plan in place to respond to an attack, the plan had not been tested at a local level, which led to a significant delay in response time when the attack occurred.
All organisations affected by WannaCry shared the same vulnerability and could have protected themselves. While IT security alerts had been issued by NHS Digital between March and May 2017, these had not been acted upon by several organisations, thereby leaving them vulnerable to attack. At the time of the attack, many organisations were therefore running unsupported Windows operating systems, or systems and firewalls that had not been updated.
For the full report on the investigation into the WannaCry cyber-attack and the NHS click here.
Following its Second Reading in the House of Lords, on 22 November 2017 the draft Data Protection Bill (the “Bill”) passed the Committee Stage and will next be considered at the Report Stage on 11 December 2017. The Bill was initially published on 14 September and once finalised it will repeal the current Data Protection Act 1998 (the “DPA”). The Bill implements various national derogations permitted by the GDPR and also extends the GDPR standards to certain areas of data processing outside EU competence. The Bill also provides for the continuation of the Information Commissioner’s role.
The Bill will therefore stand alongside the GDPR until the UK leaves the EU. At that point, if the UK is no longer part of the EEA as currently envisaged, the GDPR will fall away but the UK Government intends to replicate the regulation in national legislation through the European Union (Withdrawal) Bill – the so-called “Great Repeal Bill”.
The Bill imports much of the DPA and therefore contains few surprises for businesses, but it does at least confirm for the first time the Government’s intention to retain many of the DPA derogations and exemptions, which is welcome news.
To access the draft Bill please click here.
To access the explanatory notes please click here.
UK Government endorses new data security standards and greater patient control over use of health data
The Department of Health published its Review of Data Security, Consent and Opt-Outs (the “Review”) earlier this year. Incidents such as WannaCry (refer to article above for more detail) have created awareness of the ease and speed with which cyber-attacks can cause widespread disruption and highlight the importance of ensuring that organisations implement strong security standards, particularly in the health care sector. A further example of the potential impact in this sector was demonstrated by the security researcher Scott Gayou’s recent finding that the MedFusion 4000 pump made by Smiths Medical has eight separate flaws. In particular, the device was vulnerable to well-known attacks and the technology and system controls did not adequately check who was connecting to the device or sanitise any commands it received. These flaws have the potential to be exploited to change the dosages of critical fluids being delivered to patients. Cyber vulnerabilities such as these must be identified to prevent another WannaCry cyber-attack, or more serious attacks which threaten personal injury or loss of life, and the Review aims to undertake such an analysis of data and systems security and data sharing in the health and social care system.
The Review follows on from previous reviews commissioned by the Department of Health: one, on data security and data sharing in the health and social care system, was led by the National Data Guardian for Health and Care (“NDG”) (see our previous article for further detail), and the other, on current approaches to data security across the NHS, was led by the Care Quality Commission (“CQC”). These reviews focused on strengthening data security across the health and social care system and proposed a new model for data sharing. Following their publication in mid-2016 the Government undertook an extensive consultation and released its response in Summer 2017 in which it agreed with each of the NDG’s and CQC’s recommendations. The Government committed to:
Protect information through system security standards. It has done this by endorsing the 10 new data security standards recommended in the NDG’s report. These data security standards encourage, among other things, secure handling of personal data, the operation of secure and up-to-date technology, controls and audit trails on access to “personal confidential data”, prompt response to data breaches or “near misses”, and holding IT suppliers accountable for protecting the personal data they are tasked with processing. The Government also agreed to adopt the CQC’s recommendations on data security and to update the Information Governance Toolkit accordingly.
Enable informed individual choice on opt-outs through implementing a new consent and opt-out model for data sharing in NHS England. However, the opt-out does not extend to the use of patients’ information in anonymised form. The Department of Health has confirmed that new guidelines are in the process of being developed and will be implemented from March 2018, becoming fully effective in 2020.
Apply meaningful sanctions against criminal and reckless behaviour. It sees the application of the GDPR and the Data Protection Bill in May 2018 as providing appropriate sanctions for data breaches and reckless or deliberate misuse of information.
Protect the public interest by ensuring legal best practice and oversight. It will do this by putting the National Data Guardian role and its functions on a statutory footing, through the Information Governance Alliance (“IGA”) publishing anonymisation guidance and by working to clarify the legal framework.
The Network and Information Security Directive will also apply from May 2018 and will reinforce the ten data security standards outlined above.
Click here for the Government’s full response “Your Data: Better Security, Better Choice, Better Care”.
At this year’s Black Hat, a leading information security conference held in Las Vegas, cyber security researchers exposed new vulnerabilities in industrial control systems and warned that malware (including ransomware) could force companies to choose between expensive downtime and the potentially less expensive option of paying a cyber attacker’s ransom.
Against this background, a researcher from Tulsa University was granted permission to penetrate and test the security of five different wind farms across the US. He found the same vulnerabilities across multiple wind farms, including easy-to-guess or default passwords, weak and insecure remote management interfaces, and no authentication or encryption of control messages. The researcher also found that gaining physical access to a single turbine was enough to control the entire wind farm, and that such access proved surprisingly easy to obtain: he simply picked a lock and plugged a Raspberry Pi minicomputer into the network. Once inside the system, cyber attackers could immobilise turbines, suddenly trigger their brakes (potentially damaging them), and even relay false feedback to operators to prevent the attack from being detected.
The team built three examples of how wind farms could be attacked. These included malware that could send commands to other turbines on the network to disable them, spread from one automation controller to another across the entire farm, and enable man-in-the-middle attacks on the operators' communications with the turbines.
Although the researchers only switched off one wind turbine at a time, a malicious user could have the ability to switch off an entire wind farm. The ease with which the researchers were able to do this highlights an important (and potentially devastating) cyber security issue that wind companies will need to consider. In particular, as wind power grows as an important source of electricity, wind farms may well become a more attractive target for those looking to disrupt and garner attention.
Indeed, the Annual Incidents report 2016, issued by the European Union Agency for Network and Information Security (“ENISA”), found that malware causes the longest lasting incidents. This report reiterates, yet again, the disruptive and sector-agnostic effect of cyber-attacks and the need to implement appropriate security protocols.
Herbert Smith Freehills has assisted Airmic (the UK association for those with a responsibility for risk management and insurance) along with others including Lloyd’s of London, to prepare a guide on cyber risk for policyholders. Cyber risk is a key issue currently facing corporate insureds and the guide aims to help risk managers lead the cyber risk conversation in their organisations. The guide provides a framework for corporates to assess their cyber risks and looks at the insurance solutions available in the market to mitigate those risks. It also considers how bespoke-cyber products compare with more traditional policies that corporates are likely to be familiar with. HSF Partner, Greig Anderson, comments in the guide:
“Given the relative infancy of the UK cyber insurance market and the limited number of paid claims to date, insureds are often dubious that cyber insurance policies will pay against the losses they eventually suffer. It is vital that insurance and risk managers take the time to map the digital assets of their business against the threats they face and the scope of cyber insurance cover within their existing policy suite. Only then will they be able to clearly articulate the cover they require to insurers and increase the contract, coverage and claims certainty.”
At the time of the launch, the CEO of Lloyd’s of London, Inga Beale, predicted that the size of the cyber insurance market could double in the next three years. “This threat is really alive for businesses,” she said.
To access the guide on ‘Cyber risk: understanding your risk and purchasing insurance’ please click here.
SFC publishes consultation conclusions and guidelines for reducing and mitigating hacking risks related to internet trading
On 27 October 2017, the Securities and Futures Commission (“SFC”) issued a circular and Guidelines for Reducing and Mitigating Hacking Risks Associated with Internet Trading (the “Guidelines”), which require all licensed or registered persons engaged in internet trading to implement 20 baseline requirements to enhance their cyber security resilience and reduce and mitigate hacking risks. The Guidelines were issued following the SFC’s publication of their conclusions on the related consultation on the same day.
The SFC has also issued:
FAQs providing further guidance and practical examples for implementing the Guidelines; and
a circular attaching the Good Industry Practices for IT Risk Management and Cyber Security, which internet brokers may wish to incorporate into their information technology and cyber security risk management frameworks.
The implementation of two-factor authentication (2FA) for clients’ system login will take effect on 27 April 2018, while all other requirements will take effect on 27 July 2018.
Please also click here for our previous bulletin dated 7 July 2017 on the SFC’s consultation on the proposed Guidelines.
As commented by Mr Ashley Alder, the SFC’s Chief Executive Officer, “hacking of internet trading accounts is the most serious cyber security risk faced by internet brokers in Hong Kong”. The publication of the Guidelines therefore illustrates the SFC’s continued commitment to ensuring that cyber security management remains a top priority. Impacted firms should promptly review their existing cyber security systems and policies and make appropriate amendments to ensure timely compliance with the requirements.
Given that the Guidelines only set out the minimum standards required and are by no means exhaustive, senior management should also ensure that all systems and controls are commensurate with the firm’s business needs and operations and that additional cyber security controls are implemented where necessary.
The Monetary Authority of Singapore (“MAS”) announced on 20 September 2017 that it has established a Cyber Security Advisory Panel (“CSAP”) to advise it on strategies to enhance the cyber resilience of Singapore’s financial sector. CSAP is expected to provide global perspectives on evolving technologies and cyber threats and their implications for financial services, as well as insights on best practices in cyber security strategies.
CSAP members are appointed for a two-year term. The panel consists of 11 members, all of whom are cyber security thought leaders from around the world, including Mr David Koh, chief executive of Singapore’s Cyber Security Agency (“CSA”), and Mr Kevin Mandia, chief executive officer and board director of American cyber security services provider FireEye, which has been picked to provide training on incident response and malware analysis at the newly-established CSA Academy. Other panel members include Mr Adrian Asher, group chief information security officer of the London Stock Exchange, Ms Cheri McGuire, group chief information security officer of Standard Chartered Bank, and Mr Vincent Loy, partner and financial crime and cyber leader at PricewaterhouseCoopers Risk Services.
As noted by the managing director of MAS, strong cyber security is critical to sustaining trust and confidence where financial institutions increasingly adopt new technologies and distribute financial services on digital platforms. CSAP will help ensure that Singapore’s financial sector remains dynamic and secure in an increasingly digital world. The inaugural meeting was held in October.
In September 2017, the Singaporean Minister for Communications and Information suggested that the Cybersecurity Bill will be introduced in Parliament in 2018. The draft Bill was released for public consultation on 10 July 2017 by the Cyber Security Agency of Singapore (“CSA”) and the Singapore Ministry of Communications and Information (“MCI”); the consultation concluded on 3 August 2017.
The proposed Cybersecurity Bill has four main objectives:
to provide a framework for the regulation of Critical Information Infrastructure (“CII”) owners;
to provide the CSA with powers to manage and respond to cyber security threats and incidents;
to establish a framework for the sharing of cyber security information with and by CSA officers, and the protection of such information; and
to introduce a lighter-touch licensing framework for the regulation of selected cyber security service providers.
The proposed Bill comes at a time of increasing cyber security incidents globally, including the recent WannaCry and NotPetya/Petna malware attacks, which have prompted organisations to focus increasingly on implementing technical and operational security measures to protect their systems from such incidents.
Please note that the proposed Bill is still pending and may be amended following the conclusion of the public consultation. In the meantime, companies are encouraged to make a preliminary assessment of whether they are a critical information infrastructure owner or cyber security service provider, and what they would need to do to comply with the obligations under the proposed Bill.
Click here for the firm’s full article on the draft Cybersecurity Bill.
Japan announces intention to create cyber security bureau against backdrop of increased international cooperation
The Japanese Minister of Internal Affairs and Communications, Seiko Noda, announced on 24 August 2017 that the Japanese Government plans to establish a bureau dedicated to cyber security issues.
Minister Noda announced the establishment of the bureau in an interview with local press and an official announcement is expected soon. The bureau will handle cyber security issues that are currently managed on a divisional level within the Ministry. Minister Noda said that she will request funding to set up the bureau in the 2018 budget.
The bureau is widely expected to supplement rather than replace the existing National Information Security Centre (“NISC”), an agency within the Japanese Government responsible for formulating Japan’s information security policies.
Japan has been moving to strengthen its capacity to respond to cyber security incidents in response to what Minister Noda described as a “critical situation talent-wise compared to the U.S., Israel and others”. Developing people with the right skill set has been identified as a priority for Japan in the context of developing the digital economy. In a joint press statement following the US-Japan Policy Cooperation Dialogue on the Internet Economy held in Washington DC in September 2017, Japan affirmed the importance of making cyber security advances for the success of the digital economy. Both Japan and the US discussed national initiatives in response to cyber security threats including workforce development.
These developments follow on from the fifth Japan-US Cyber Dialogue held in July 2017, in which both nations pledged close cooperation and collaboration, including with private sector agents, to promote cyberspace resilience and security. The two governments announced that the NISC would sign up to the US Department of Homeland Security’s Automated Indicator Sharing programme, which allows cyber threat indicators to be shared between government and the private sector in real time.
Click here to read the joint press statement for the US-Japan Policy Cooperation Dialogue.
Click here to read the joint statement of the Japan-US Cyber Dialogue.
The Cyberspace Administration of China (“CAC”) released its draft Regulations on Protection of Critical Information Infrastructure Security (“Draft Regulations”) on 10 July 2017.
The Cyber Security Law (“CSL”), enacted in 2016, officially introduced the concept of Critical Information Infrastructure (“CII”) for the first time with a section covering the protection of CII. Pursuant to the CSL, CIIs will be afforded special protection measures in addition to those provided under the Multi-layer Protection Scheme (“MLPS”), the major cyber protection regime envisaged under the CSL. These protection measures include higher standards for protection obligations and closer scrutiny by the Government over the operation of the CII. The CSL authorises the State Council to publish regulations on the scope of CII and security protection measures.
Although the Identification Guidelines have not yet been published, companies operating large information system facilities in the sectors listed in the Draft Regulations should be aware of the extended scope of CII and evaluate their current cyber security protection systems against the requirements of the Draft Regulations. In particular, entities affected by the localisation requirement should be prepared to review and adjust their current data storage and system maintenance arrangements to comply with the forthcoming regulations on CII protection.
Click here to read the full article.
The Victorian Government has appointed Sven Bluemmel as Victoria’s first information commissioner. This follows the creation of the new Office of the Victorian Information Commissioner (“OVIC”) on 1 September 2017, merging the Office of the Commissioner for Privacy and Data Protection and Victoria’s freedom of information office. The OVIC is now the single body responsible for freedom of information, public sector privacy and data protection laws. This matches similar bodies in New South Wales, Queensland and the Commonwealth.
Bluemmel has been the information commissioner in Western Australia for eight years and was previously a director at Western Australia’s Public Sector Commission. Bluemmel started his role as Information Commissioner at the OVIC on 25 September 2017.
Prime Minister Malcolm Turnbull has recently appointed Alastair MacGibbon as the head of Australia’s Cyber Security Centre (“ACSC”). The Prime Minister appointed MacGibbon following a Government commissioned review of Australia’s security agencies.
The ACSC is a central body that houses 260 cyber security experts from ASIO, Defence, the Australian Federal Police, the Attorney-General’s Department and the Australian Crime Commission. Under the review’s recommendations, the ACSC will be set up to provide a 24/7 response to “serious cyber incidents”, and address the community’s needs in relation to emerging cyber security incidents.
MacGibbon is currently the Prime Minister’s special advisor on cyber security as part of the national cyber security strategy. In this role, MacGibbon is responsible for creating and strengthening of partnerships between Australian Governments, private sector, non-governmental organisations and academia to deliver national cyber security capacity and capability. Previously, MacGibbon was Australia’s first eSafety Commissioner and an Agent with the Australian Federal Police.
The Australian Electoral Commission (“AEC”) is in the process of undertaking a holistic security review of the electoral systems that are responsible for delivering the next federal election in either 2018 or 2019. The security review aims to identify any vulnerabilities in the systems and provide recommendations on how they can be addressed. Among other reasons, this was prompted by the need to protect the Australian electoral system against cyber security threats, particularly in light of speculation about foreign cyber-attacks on the US elections.
In June 2017, a parliamentary committee found that without the allocation of additional funds for an IT overhaul, the AEC’s ageing IT systems risked compromising the integrity of Australia’s federal voting system. The AEC’s current election and enrolment management systems were first introduced in the early 1990s. The AEC had previously told the parliamentary committee that its core election management and roll management systems were at the end of their useful life. A significant part of its current funding is spent on keeping its IT systems operational and secure. As a result, the review may result in the need for either significant upgrades or a replacement of the AEC’s core systems.
The Victorian Government recently released a cyber security strategy to ensure that government services, infrastructure and information are protected from cyber-attacks. This makes Victoria the first Australian state to have a dedicated cyber security strategy.
The strategy was developed in response to the unprecedented scale of cyber-attacks on, and attempted disruption of, government networks across the world. The purpose of developing the strategy is to protect sensitive citizen and other data against loss, malicious alteration, and unauthorised use. The strategy aims to shift Victoria from its current agency-by-agency approach to a whole-of-government approach.
Specifically, the strategy will include:
appointing a Chief Information Security Officer within the Department of Premier and Cabinet to oversee the government response to ongoing cyber threats and coordinate cross government action;
developing cyber emergency governance arrangements with Emergency Management Victoria;
strengthening partnerships across all levels of government and the private sector to share best practice, intelligence and insights;
coordinating the procurement of proven cyber security services; and
presenting a quarterly cyber security briefing to the Victorian Secretaries Board and the State Crisis and Resilience Committee.
The Victorian Government’s cyber security strategy can be accessed here.
Chinese firm Canyon Bridge Capital’s proposed acquisition of U.S. semiconductor company Lattice Semiconductor Corporation was blocked by President Trump on grounds of national security.
On 13 September 2017, an order regarding the proposed acquisition of Lattice Semiconductor Corporation stated:
“There is credible evidence that leads me to believe that...Canyon Bridge Capital...through exercising control of Lattice Semiconductor Corporation, a corporation organized under the laws of Delaware (Lattice), might take action that threatens to impair the national security of the United States.”
The order, issued by President Trump, followed a recommendation by the Committee on Foreign Investment in the United States ("CFIUS") that the proposed acquisition of Lattice (a U.S. semiconductor firm) by Canyon Bridge Capital should not be allowed to proceed due to national security concerns. The President’s order gave the parties 30 days to abandon the proposed transaction. This led Lattice and Canyon Bridge to issue a joint statement announcing that they had terminated the proposed deal.
Reactions to the Decision
In the wake of the decision, Canyon Bridge expressed its disappointment in the order, while China’s Commerce Ministry stated that an investigation into national security concerns “should not become a tool for advancing protectionism”.
As companies with deals that may be reviewed by CFIUS attempt to interpret the President’s recent move, at least two things have become clear. First, while Lattice and Canyon Bridge may have assumed that they could persuade President Trump to approve their deal (and bypass CFIUS’s recommendation) by appealing to various mitigating factors, it appears that CFIUS’s recommendations will be granted a high level of deference by the President going forward. Second, President Trump’s decision signals that, despite his desire to attract overseas investment in the U.S., he remains sceptical of Chinese state-owned enterprises, and therefore any attempt by such enterprises to acquire a U.S. business (particularly one operating in the technology sector) may, depending on the nature of the target’s business, encounter CFIUS difficulties. This holds true even in acquisitions that do not involve U.S. deal parties, to the extent such deals would transfer control of U.S. subsidiaries to non-U.S. firms.
It is important that all companies working on deals covered by CFIUS be aware of these trends and act accordingly in order to maximize their chance of obtaining CFIUS clearance.
For more information on CFIUS, including CFIUS best practices, please click here.
Recognising the need for cyber security protections, autonomous vehicle legislation gains traction in US Congress
Legislation that will pave the way for the accelerated nationwide testing and deployment of “highly automated vehicles” (“HAVs”) on US roads has advanced in the US House of Representatives and the Senate, marking the first effort by US lawmakers to enact federal legislation in this fast-moving area. The “Safely Ensuring Lives Future Deployment and Research in Vehicle Evolution Act” or “SELF DRIVE Act” (the “SDA”) passed in the House on 6 September 2017. The US Senate’s Commerce Committee unanimously approved a companion bill, the “American Vision for Safer Transportation Through Advancement of Revolutionary Technologies Act” or “AV START Act” (the “ASA”), on 4 October 2017. The SDA and ASA would establish the federal government’s exclusive authority to regulate HAV design, construction and performance, and for the first time lead to a uniform, nationwide system of HAV rules. While the House and Senate bills share the same purpose and general structure, they differ in their treatment of consumer privacy, cyber security, and other areas.
The SDA and ASA would clear the way for the large-scale testing and introduction of autonomous cars on US roads primarily by empowering the National Highway Traffic Safety Administration (“NHTSA”), the federal agency responsible for road safety in the US, to overcome current regulatory obstacles through the NHTSA’s interpretation, rule-making, and exemption authority. Both bills grant the NHTSA the authority to expand significantly the number of exemptions from federal vehicle safety standards for HAVs, with annual increases. The two bills differ slightly on the schedule of exemptions. The SDA contemplates 25,000 HAV exemptions in the first year, 50,000 in the second year, and 100,000 in the third year, as compared to the ASA’s initial 15,000 exemptions, rising to 40,000 in the second year, and 80,000 HAV exemptions in the third year. The NHTSA would also be empowered to use its rulemaking and interpretation authority to amend or develop the Federal Motor Vehicle Safety Standards (“FMVSS”) to accommodate self-driving vehicles, e.g., by re-defining the term “driver” under the applicable FMVSS to include the software that powers an HAV.
The House and Senate bills invoke the doctrine of pre-emption to overcome the patchwork of often inconsistent and contradictory state regulations on HAVs with a uniform, federal standard. Stakeholders have long deemed differing state regulations an impediment to HAV innovation and testing. The SDA and ASA would grant the NHTSA exclusive authority to establish standards for the “design, construction, or performance” of HAVs, while preserving states’ ability to impose performance standards that are identical to the FMVSS. Under both the SDA and ASA, states retain their traditional authority to regulate registration, licensing, driving education and training, insurance, law enforcement, crash investigations, safety and emissions inspections, and congestion management or traffic in respect of HAVs. However, the ASA prohibits discrimination against persons with disabilities in the granting of licenses to drive an HAV.
The SDA also addresses the impact that autonomous cars undoubtedly will have on consumer privacy. The ASA’s silence on privacy issues suggests that the Senate may choose to adopt the House privacy provisions in the final bill. Under the SDA, the NHTSA would create a public database to include data on all HAVs benefitting from an exemption. The NHTSA would draw upon the considerable data generated by HAVs in order to support the development of safety rules and standards, including the requirement of safety assessment certifications by HAV manufacturers. As a result, the SDA requires HAV makers to develop a written privacy plan regarding the collection, use, sharing, and storage of information about vehicle owners or occupants collected by an HAV or automated driving system, and outlining how owners and occupants of the vehicle will receive notice of this policy. The US Federal Trade Commission is authorised to study and report on HAV privacy issues, and is empowered to bring enforcement actions for violations of the privacy provisions included in the SDA.
The SDA also proposes more extensive requirements related to cyber security measures. While the SDA requires manufacturers to submit a written cyber security plan and designate a dedicated cyber security officer as a pre-condition to selling or introducing HAVs, the Senate’s ASA requires a plan without conditioning market access on the plan. The cyber security plans mandated by the SDA must include a manufacturer’s practices for detecting and responding to cyber-attacks, unauthorised intrusions, and “false and spurious messages and malicious vehicle control commands.” The SDA also contemplates the creation of a Highly Automated Vehicle Advisory Council, which as part of its mandate would advise on whether the practices introduced by manufacturers are effectively protecting consumer privacy and security.
The House and Senate versions of the HAV legislation also differ on issues such as the inclusion of commercial trucks in the waiver program and the procedure by which the NHTSA will identify federal regulations that must be amended. The Senate’s ASA omits self-driving commercial trucks from the proposed legislation, which will likely facilitate passage by avoiding opposition from US truckers’ unions. Finally, the Senate’s ASA accelerates updates to the FMVSS for HAVs by entrusting the initial comprehensive review of the FMVSS and recommendations for modifications to the Department of Transportation’s technical laboratory, the Volpe Centre. The ASA requires the Volpe Centre to produce its recommendations within six months. By contrast, the House’s SDA permits the Department of Transportation two years to conclude substantially the same task.
The New York State Department of Financial Services’ (“DFS”) Cybersecurity Rules (the “NYDFS Rules”) came into effect for financial institutions in New York on 28 August 2017. The NYDFS Rules govern all banks, insurance companies, and other financial institutions in the State of New York. See 23 NYCRR § 500. New York’s regulation stands out primarily for its detailed and prescriptive approach, requiring the adoption of a suite of best practices and specific security protocols that are expected to play a beneficial normative role for state and federal legislators going forward.
Although the revised NYDFS Rules adopt a less aggressive approach than an earlier September 2016 draft, and largely mirror the federal requirements under the Gramm-Leach-Bliley Act (“GLBA”), the final NYDFS Rules issued in March 2017 surpass existing federal standards in three notable respects:
Broader scope of covered entities: While the GLBA applies to “financial institutions,” the NYDFS Rules range more broadly to include any non-governmental entity that possesses a “certificate, permit, accreditation or similar authorization under the Banking Law, the Insurance Law or the Financial Services Laws” of New York.
Broader scope of covered information: While the scope of data protected under the GLBA encompasses personally identifiable financial information, the NYDFS Rules define “nonpublic information” to include a far broader scope of potential data, including, for example, material “business-related information”; virtually any information specific enough to be used to personally identify an individual; and any data related to health care, including an individual’s health records, family history, or payment for health care.
Broader threat definition: While the GLBA focuses on data security, the NYDFS Rules define a “cyber security” event more broadly as “any act or attempt, successful or unsuccessful, to gain unauthorized access to, disrupt or misuse an Information System or information stored on such information system.” This definition recognizes that threats to cyber security may target more than data; they may target networks and systems as a whole.
Key requirements for covered entities under the NYDFS Rules include (1) appointing a Chief Information Security Officer (“CISO”); (2) establishing and maintaining a Cyber Security Program at the covered institution; (3) adopting a written Cyber Security Policy with technical controls and incident response procedures; (4) performing risk assessments; and (5) managing third party service providers. The Rules also mandate annual penetration tests, bi-annual vulnerability assessments, audit trails, ongoing training programs, and secure data disposal.
While the NYDFS Rules apply only to the financial and insurance sectors, they are widely seen as standard-setting for the industry. As the procedures mandated by the Rules are not specific to the financial sector, the NYDFS Rules may serve as a model for other state legislatures and, in the process, bring US jurisdictions closer on cyber security policy.
US Government seeks US Supreme Court review of ruling prohibiting government seizure of emails stored outside the United States
We have previously reported (see our prior update, available here) on a US appellate court ruling (issued 14 July 2016) that handed a major victory to Microsoft by finding that US authorities cannot compel US tech companies to disclose email content they store on servers located outside the United States. In January 2017, a vote taken by the judges of the United States Court of Appeals for the Second Circuit on whether the case should be reheard was split 4-4, which effectively resulted in Microsoft’s victory being upheld.
The US Department of Justice ("DoJ"), as expected, sought review of this decision in the US Supreme Court, via a petition filed 23 June 2017. Review in the Supreme Court is discretionary. On 16 October 2017, the US Supreme Court granted this petition for review (in a brief order without substantive comment, as is the norm), which means that the Court will hear the case on the merits. The briefing schedule has been posted, with the DoJ's merits brief due 6 December 2017, and Microsoft's response due 11 January 2018. This schedule puts the case on track for argument and a decision sometime in 2018.
Since the issuance of the appellate court ruling, several US lower courts have rejected its reasoning, though these decisions remain subject to motions to vacate or similar objections, and eventual appeal. The pending Supreme Court appeal, plus these subsequent lower court rulings, may add pressure on the US Congress to update the Stored Communications Act to better align the legal framework with 21st century technological realities.
The US Securities and Exchange Commission (“SEC”) released the results of the Cybersecurity Examination Initiative via its Office of Compliance Inspections and Examinations (“OCIE”) on 7 August 2017. Per the report, the results of the tests were generally positive, and suggest increasing awareness of cyber security policies in the financial services industry and widespread implementation of recommended security protocols since 2014.
Beginning in 2014, the SEC announced a series of cyber security “examinations” for regulated firms, to gauge the “cyber security preparedness” of an industry that has proven to be a frequent target of cyberattacks. The first round of testing was completed in 2014. This second round of examinations was designed to assess whether SEC-regulated firms had developed written policies and procedures adequate to address cyber security threats, as well as to establish whether any relevant procedures have been implemented and are actually being followed in practice. As in the 2014 round, OCIE tested 75 firms in total, including broker-dealers, investment advisers, and investment companies registered with the SEC. OCIE focused on a number of areas, including governance and risk assessment, access rights and controls, data loss prevention, vendor management, training, and incident response.
In some areas, the examination revealed high compliance rates. The tests revealed, for example, that all broker-dealers, all funds, and nearly all advisers maintained cyber security-related written policies and procedures addressing the protection of customer or shareholder records and information. Compliance rates were nearly as high for periodic risk assessments, penetration testing, implementation of data loss prevention and system maintenance protocols, as well as regular scanning for system vulnerabilities.
OCIE’s testing did, however, reveal certain areas where cyber security preparedness can improve. Policies and procedures were often not adequately tailored, were not always enforced, or did not reflect firms' actual practices. A significant number of the firms and advisers tested, for example, had not yet successfully implemented response plans for data breaches. The guidance on cyber security policies provided to employees was found to be too “narrowly scoped” and “vague” in some instances. There were also several instances of insufficiently regular customer protection and security protocol reviews. Training was found lacking in some cases.
Overall, the SEC’s 2017 cyber security examinations for the financial sector have revealed that the right procedures are currently in place in almost all cases. In some cases, however, the SEC notes that additional work is needed to ensure that those procedures are honoured in practice.
The full report can be found here.
With the support of a bipartisan group of US senators, the “Internet of Things Cybersecurity Improvement Act of 2017” (the “CIA Act”) was introduced in the US Senate on 1 August 2017, with the goal of establishing minimum Internet of Things (“IoT”) requirements for federal procurements of connected devices. On 26 October 2017, Democratic legislators in the US Senate and the US House of Representatives introduced related legislation, the "Cyber Shield Act of 2017" (the "CS Act"), in both houses of Congress.
Both bills eschew direct regulation of IoT manufacturers or IoT devices. The CIA Act leverages the procurement purchasing power of the US federal government to set incentives for manufacturers. This indirect approach to standards-setting by a government regulator has been praised by industry as flexible and forward-looking. By mandating that government procurement only be for IoT devices that meet certain minimum standards for cyber security, the practical import of the CIA Act, if enacted, would be to restrict government contracts to those manufacturers who are willing to incorporate the desired security features into their products, without restricting those manufacturers from selling products that lack those features to the broader market. Sponsors of the CIA Act expect that the minimum standards required for federal contracts will have a normative effect on the broader IoT industry, which is expected to reach 20 billion devices by the year 2020. IoT devices can often be the most vulnerable point in any network to which they are connected, making them a potential target for Distributed Denial of Service (DDoS) attacks. As a significant vulnerability in US networks, IoT devices are a priority for cyber security reform.
Some of the key provisions proposed in the CIA Act include requirements that IoT devices be “patchable” with the latest software security updates from the vendor; that the devices never contain “hard-coded” (i.e., non-modifiable) credentials or passwords; and that the devices be free of other known security vulnerabilities. The CIA Act would require contractors supplying IoT devices under federal government contracts to provide written certification that the devices comply with these and other criteria. It would also give US government agencies the discretion to purchase devices that meet equivalent or more rigorous standards as determined by the National Institute of Standards and Technology.
In addition, the CIA Act incorporates a number of mechanisms designed to probe its own effectiveness and, where requirements are found to be insufficiently robust, for the requirements to fall away. For example, the CIA Act requires the US Office of Management and Budget (“OMB”) to submit a report to Congress five years after the law comes into force to evaluate the CIA Act's effectiveness and make suggestions for updates or amendments. It further empowers the OMB to discontinue the procurement requirements for federal agencies after five years, if warranted. Similarly, the CIA Act calls upon the US Department of Homeland Security’s National Protection and Programs Directorate to liaise with industry and researchers in the development of coordinated disclosure guidelines for IoT devices sold to the US Government. In addition, and notably, the CIA Act would limit the liability of researchers engaged in good faith systems penetration testing of IoT devices, which in turn would incentivise researchers to uncover vulnerabilities in the devices and to share these issues with the vendor, without exposing the researchers to liability under the Digital Millennium Copyright Act or the Computer Fraud and Abuse Act.
The CS Act, which is more limited in scope than the CIA Act, would establish a voluntary certification program for IoT manufacturers. The CS Act would rely upon a new advisory committee to establish cyber security and data security benchmarks for the industry, drawing its membership from industry representatives, cyber security experts, public interest advocates, and federal employees with relevant expertise. The proposed certification program is intended to increase consumers' awareness of IoT security issues and to engender confidence in consumers that certified IoT manufacturers and products meet the voluntary federal standards.
In a ruling that could spur additional US data breach litigation, a US federal appellate court has ruled that the theft of certain sensitive personal data, without more, is enough to establish a concrete “injury-in-fact” and thus confer standing to bring a data breach lawsuit. See Attias v. CareFirst, Inc., Case No. 16-7108 (D.C. Circuit 1 August 2017).
Briefly, CareFirst, a US health insurer, served roughly one million clients in the greater Washington DC area in 2014, when unknown hackers breached its computers and stole sensitive customer personal data, including social security numbers. CareFirst did not discover the breach until April 2015 and notified its customers thereafter. Customers subsequently filed a proposed class action, alleging that CareFirst failed to adequately encrypt their personal data, and in so doing violated applicable state consumer protection statutes.
Under US law, a plaintiff must plead a concrete injury-in-fact in order to have the right, or “standing,” to bring a lawsuit. A lower federal court dismissed the complaint for lack of standing, finding the risk of future injury to the plaintiffs too speculative to establish an actual injury in fact necessary to bring a suit.
The appellate court reversed. At the outset, the court was careful to note that “[n]obody doubts that identity theft, should it befall one of these plaintiffs, would constitute a concrete and particularized injury.” The question on appeal, however, was “whether the complaint plausibly alleges that the plaintiffs now face a substantial risk of identity theft as a result of CareFirst’s alleged negligence in the data breach.” In finding that it did, the court noted that the data breach exposed customers’ social security and credit card numbers, and even “CareFirst does not seriously dispute that plaintiffs would face a substantial risk of identity theft if their social security and credit card numbers were accessed by a network intruder.” Since an unauthorised party has already accessed the plaintiffs’ personally identifying data, it is at the very least “plausible … to infer that this party has both the intent and the ability to use that data for ill.” In other words, “a substantial risk of harm exists already, simply by virtue of the hack and the nature of the data that the plaintiffs allege was taken,” which the Attias court deemed sufficient to establish an injury-in-fact.
Attias follows several other federal appellate court rulings that have conferred standing upon plaintiffs based on the enhanced risk of future identity theft. Since certain other federal appellate courts have found allegations of such future injury to be too speculative to support standing, the question of whether stolen data or potential identity theft constitutes sufficient injury for standing awaits definitive resolution by the US Supreme Court.