Navigating AI and Data Privacy: What Business Leaders Must Know in 2023
How legislation and regulation impact data privacy and AI use for businesses.
U.S. consumers are increasingly comfortable sharing personal data with companies, according to a 2022 study by the Global Data and Marketing Alliance (GDMA), but 66% of those surveyed also want to know how their data is collected and used. As businesses continue to leverage artificial intelligence (AI) for insights and efficiencies, the use of data has become a growing topic of concern. In this article, we explore how legislation and regulation may impact data privacy and AI use for businesses.
State legislation for data privacy and AI
With no overarching federal data privacy legislation in place, the California Consumer Privacy Act (CCPA) and California Privacy Rights Act (CPRA) have likely set the precedent for data privacy laws in the United States. The CPRA, also known as Proposition 24, was approved by California voters in November 2020 and established new standards for the collection, retention and use of consumer data. The CPRA took effect January 1, 2023, but a California Superior Court decision delayed enforcement of its implementing regulations until March 29, 2024. Until then, the CCPA remains in effect and enforceable against businesses.
Colorado’s AI regulations highlight concerns around unfair discrimination. In 2021, the Colorado legislature enacted Senate Bill 169, prohibiting insurers from unfairly discriminating on the basis of an individual’s race, color, national or ethnic origin, religion, sex, sexual orientation, disability, gender identity or gender expression in any insurance practice that uses algorithms, predictive models or external consumer data sources. The law directed the Division of Insurance to promulgate rules governing the use of algorithms, predictive models and external consumer data sources, and to require testing of those systems for unfair discrimination. The Division chose to issue rules one line of business at a time, starting with life insurance. Recently, a second draft of the proposed governance rule for life insurers was released for comment, addressing several of the industry’s concerns. According to the Division, the testing rule will be released soon, and personal auto practices will be examined next.
State regulation for data privacy and AI
The National Association of Insurance Commissioners (NAIC) is developing both a new model law to address data privacy and a model bulletin on AI regulation. The NAIC develops model laws and bulletins that states can choose to adopt: model laws must be passed by each state legislature, and bulletins must be issued by each state insurance department.
However, some insurance companies have raised concerns about the draft model law, such as restrictions on sharing data outside the U.S. and impractical requirements for consumer data deletion and retention. These limitations could hinder customer service and insight into risks and coverages, which are becoming increasingly interconnected and global.
To better understand AI’s use in the insurance industry, the NAIC Big Data and AI Working Group has issued surveys to different lines of business, including personal auto, homeowners and life insurance. Additionally, the NAIC Innovation, Cybersecurity, & Technology Committee is developing an AI model bulletin outlining a regulatory framework. Industry concerns have surfaced regarding the scope and purpose of the bulletin and will be addressed during the NAIC Summer National Meeting.
Moreover, the Big Data and AI Working Group is crafting a set of model questions to assess a company’s AI usage, ranging from the purpose of the AI to the software being used. These efforts aim to enhance understanding and regulation of AI in the insurance sector.
Federal initiatives for data privacy and AI
At the federal level, data privacy efforts face challenges from divided partisan control and jurisdictional overlap, which have hindered comprehensive legislation. The House Financial Services Committee passed the Data Privacy Act of 2023, which aims to modernize financial data privacy laws, empower consumers to delete their data and restrict downstream use of consumer information. However, concerns have been raised that data deletion requirements could create issues for long-tail lines of business and predictive models, which depend on retaining historical data.
On AI regulation, the Biden administration and Congress are in the early stages of policymaking. Last year, the administration released a Blueprint for an AI Bill of Rights, and the Department of Commerce initiated non-binding regulatory activities this year. The rise of generative AI technologies such as ChatGPT has raised concerns globally, prompting regulators to investigate their use.
To address unintended bias, multiple federal agencies have reaffirmed their authority to supervise AI’s impact, warning financial firms about the risk of bias and civil rights violations. Stalled legislative efforts to develop comprehensive data privacy laws are further complicated by the implications of AI applications for consumer data. The recently released SAFE Innovation Framework, introduced by Senate Majority Leader Chuck Schumer, aims to establish AI standards and addresses the concern that the U.S. may lag in shaping AI regulation.
Global developments on data privacy and AI
The European Union (EU) Commission’s adequacy decision finds that the United States provides a level of data protection comparable to the EU’s, a step toward a more harmonized U.S.-EU data regime. However, legal challenges to the decision may follow.
On AI, the EU is actively working on the AI Act, which targets machine learning and deep learning systems and could take effect by year-end; under recent drafts, simple statistical models are no longer covered. The EU-U.S. Trade and Technology Council (TTC) has introduced a Joint Roadmap for Trustworthy AI and Risk Management, which aims to establish shared terminologies and metrics for measuring AI risk across the Atlantic. These initiatives signal the EU’s proactive approach to AI regulation and data privacy.
Zurich’s approach to AI and its Data Commitment
Zurich’s approach to AI centers on using it to manage data, achieve efficiencies and generate insights that benefit our operations, customers, brokers and the insurance industry. With data production expected to surge, we work proactively to understand the legislation and regulations that affect data privacy and AI use and to help shape sensible legislation.
Zurich is committed to responsible data handling, fairness, privacy, transparency and accountability. Our public Data Commitment demonstrates our dedication to leading in ethical data use and supporting responsible technology practices. Zurich also follows an AI Assurance Framework to ensure that the adoption of AI is well managed and includes appropriate safeguards. This approach reflects Zurich’s mindful and responsible stance toward AI and data management.
In an evolving landscape of data privacy and AI regulation, businesses must be aware of various state, federal, and global initiatives. Business leaders need to stay informed and comply with relevant legislation while leveraging AI to benefit their organizations and customers responsibly.
Lynne Grinsell is VP, Head of State Affairs for Zurich North America. She brings a comprehensive understanding of legislative and regulatory affairs to her role at Zurich. With an extensive background in navigating complex insurance compliance landscapes, she has effectively advocated for the company’s interests while ensuring full compliance with relevant laws and regulations. Grinsell’s expertise in shaping public policy and her ability to anticipate regulatory changes have been instrumental in positioning Zurich as a leader in the insurance industry. Her commitment to maintaining ethical standards and fostering positive relationships with regulatory bodies has earned her recognition as a trusted authority in the field.
1. GDMA (Global Data & Marketing Alliance). (2022). US Data Privacy 2022. Retrieved from https://globaldma.com/wp-content/uploads/2022/03/GDMA-US-Data-Privacy-2022.pdf
2. Thompson Hine LLP. (n.d.). California Consumer Privacy Act Compliance. Retrieved from https://www.thompsonhine.com/services/privacy-cybersecurity/california-consumer-privacy-act-compliance/
This article is provided for informational purposes only. Please consult with qualified legal counsel to address your particular circumstances and needs. Zurich is not providing legal advice and assumes no liability concerning the information set forth above.
By Lynne Grinsell
Vice President, Head of State Affairs