The Challenges of Artificial Intelligence: A South Carolina Response

Quality of Life
August 29, 2024

Jennifer Buckley

Research Fellow

According to the Pew Research Center, 90% of Americans claim to have some awareness of AI. That is not surprising, as the term went from unknown to ubiquitous in only three years. Or was it three weeks? Yet that same Pew survey found that only one in three respondents claims to have heard a lot about it.

Artificial Intelligence (AI) is rapidly transforming from a futuristic concept into a core component of modern life. From automating tasks to providing advanced analytics, AI is revolutionizing how industries operate. The potential of AI is vast, and what we see today is merely the tip of the iceberg. McKinsey & Company estimated in 2023 that “AI could add the equivalent of $2.6 trillion to $4.4 trillion annually” to the economy, driving significant GDP growth. In 2024, the AI market has already exceeded $184 billion, continuing its rapid year-over-year growth.

However, with great potential comes great responsibility. 

South Carolina has recognized the need for a thoughtful approach to Artificial Intelligence: one that fosters innovation while addressing the risks. On November 13, 2023, South Carolina House Speaker Murrell Smith announced the establishment of a dedicated Artificial Intelligence Committee of the House of Representatives, chaired by State Representative Jeff Bradley (R-Beaufort). This committee is devoted to understanding AI, cybercrime, and cybersecurity, “exploring both the positive and negative implications of the rapidly advancing technology in the state.” The creation and work of this committee, and the legislation it may spawn, have positioned South Carolina as a leader in AI policy development, ensuring the state remains informed and prepared as the technology continues to evolve.

In June 2024, South Carolina’s Department of Administration (ADMIN) released South Carolina’s State Agencies’ Artificial Intelligence (AI) Strategy. This strategy outlines the approach state agencies will take when dealing with AI, guided by the Three Ps: Promote, Protect, and Pursue.

Below, we explore three key AI challenges facing South Carolina and how the state plans to address each. 

 

Challenge 1: Data Privacy and Security 

As AI systems grow more sophisticated, so too do the means and methods of cybercriminals. Data breaches and cyberattacks are on the rise, particularly those targeting small businesses. The Identity Theft Resource Center (ITRC) reports that nearly three-quarters (73%) of U.S. small business owners faced a cyberattack last year, with most of these attacks targeting employee and customer data. And 85% of security professionals who observed a rise in cyberattacks over the past year attribute the increase to bad actors using generative AI to carry out their crimes.

Moreover, AI systems, which rely on vast amounts of data, can be targeted and exploited by cybercriminals. Breaches can expose sensitive information, including personal data, financial records, and research data. If not properly secured, these systems become targets for criminals seeking access to private data or opportunities for identity theft. South Carolina has had to learn this lesson the hard way.

AI is a double-edged sword: it drives profits, but criminals can use generative AI to attack data systems, and companies that deploy AI systems are more vulnerable to exploitation. This makes data privacy and security a top priority for state governments.

South Carolina’s Solution: Protect

“Protect” is one of the cornerstones of South Carolina’s AI strategy, making privacy and security integral to any future AI policy decision. 

The strategic report names the South Carolina Department of Administration’s Division of Information Security (DIS) as responsible for creating a near-term risk management strategy for state government data systems. That strategy will align with the DIS Information Security and Privacy Standards to identify and mitigate risks associated with AI adoption through wide-ranging case evaluation. South Carolina’s government is committed to developing “strong security practices and guardrails to protect citizens from the potential risks of AI” (p. 10). The DIS is charged with ensuring that privacy and protection are paramount for individuals and small businesses alike.

 

Challenge 2: Ethical and Responsible Use of AI 

The role of AI in decision-making is increasing. The technology’s power to synthesize information into a clear judgment is being utilized in academia, healthcare, employment, criminal justice, and banking. Users turn to AI programs, hoping for an objective source of guidance. 

However, the quality of the AI output depends on the quality of the input. If an AI system is fed biased or false data, it will offer a response that is likewise biased or untrue. AI algorithms have the potential to inadvertently perpetuate and even exacerbate existing prejudice, leading to unfair outcomes. As businesses increasingly incorporate AI into their operations, ethics are at the forefront of the discussion. 

Currently, there is little to no government oversight of how AI programs are coded in the U.S., and no metrics for ensuring fairness, unbiased responses, and truthful answers. While federal regulation is not the answer, it is important to note that unmitigated bias in algorithms is an issue. AI amplifies the negative impact of bias: a flaw in a widely used system could affect millions of people, exposing companies to class-action lawsuits. To ensure ethical and responsible use of AI, states must consider issues such as prejudice in AI algorithms, transparency in AI decision-making processes, and accountability for AI-driven actions.

The challenge lies in ensuring that AI systems are designed and deployed in ways that respect individual rights and societal ethics without stifling innovation or overly restricting the information AI systems can access.  

South Carolina’s Solution: Promote

Ethical practices fall under the “Promote” section of South Carolina’s AI strategy. The document enumerates measures for establishing AI governance that ensures usage aligns with organizational values, ethics, and legal requirements (p. 13). The action steps toward AI governance involve collaboration with academic institutions, industry experts, and public stakeholders to create ethical frameworks that guide AI development and application.

Following a recommendation from AI professionals at IBM, South Carolina will establish an AI Center of Excellence (COE), a coalition of SC-based IT leaders that will provide best practice insights, evaluation of methods, and ongoing collaboration on AI projects. This COE will foster communication and encourage safe, fair, and ethical practices for government AI use. 

 

Challenge 3: Workforce Displacement 

Artificial Intelligence can streamline routine processes for businesses and individuals, improving operational efficiency. However, this capability has a dark side. While AI has the potential to create new job opportunities, it also threatens to displace a significant number of workers.

Some studies have shown that adding one robot to a manufacturing process displaces an average of 6.6 positions. Goldman Sachs published a 2023 report estimating that AI-driven automation could affect the equivalent of 300 million full-time jobs globally.

Workforce displacement is not a guaranteed result of AI integration, but there is no doubt that the introduction of generative AI will continue to change the landscape of the workforce. AI’s impact on employment and job availability is an ongoing concern for state legislatures.

South Carolina’s Solution: Pursue

Addressing workforce displacement is a complex issue, and the South Carolina House Committee on Artificial Intelligence is looking to experts for guidance. In testimony before the committee on January 18, 2024, Dr. Homayoun Valafar, Professor of Computer Science and Engineering at the University of South Carolina, encouraged legislators to look to the past as a roadmap for envisioning the future. He used the historical example of the ATM to illustrate how a new technology shifted the kinds of tasks workers perform without replacing the workers themselves. Dr. Valafar believes the same could hold for Artificial Intelligence: the technology will shift the nature of work rather than replace human employees. He emphasized the importance of preparing the future workforce and training the current workforce for transition, an essential step to mitigate workforce displacement.

A key part of pursuing AI is increasing AI literacy: the understanding of what AI is and how to operate it safely and responsibly. South Carolina’s AI strategy includes initiatives to promote AI training opportunities that will increase AI literacy, following Dr. Valafar’s recommendation (p. 14). These efforts to build practical AI literacy will help South Carolina’s workforce adapt to new roles and responsibilities as the technology grows.

 

Conclusion 

South Carolina’s evolving AI strategy reflects a forward-thinking attitude toward technology. It is a strategy that embraces the benefits, proactively addresses the risks, and resists the urge to regulate. Through the insights gained from the proceedings of the House AI committee, South Carolina is set to embrace the “Three Ps” as balanced principles to guide state agencies in their use of AI. Businesses will have their say as well.

Data privacy, ethics, and workforce displacement are all challenges for a would-be tech-savvy state. However, with thoughtful, free-market-respecting policies and methods of ongoing review, the Palmetto State is poised for success in AI. Then South Carolina will not simply keep up with the AI revolution; it can set the pace.

Palmetto Promise plans to continue its research into artificial intelligence with an ongoing series on this topic, so stay tuned.