The Era of the Chief AI Compliance Officer Has Arrived
ChatGPT didn’t write this article. Nor did Google’s Bard. But chat-based AI tools (also known as chatbots) are disrupting the white-collar workplace as we know it.

Jobs with the highest exposure to artificial intelligence, according to a recent analysis of government data by Pew Research Center, include budget analysts, data entry professionals, tax preparers, technical writers, and web developers. AI may either replace or assist the most important activities of these jobs, according to Pew. This means that AI could soon replace or reshape nearly one in five U.S. jobs whose primary tasks are to acquire and analyze data or information.

While public sector HR professionals may not be on the hit list of the jobs most exposed to AI, HR pros are likely to have varying levels of exposure to AI, including legal pitfalls that may not be readily evident.

So, to mitigate the damage from the inevitable lawsuits, how can cities, states, agencies, and organizations best prepare for this era of grand automation?

First, instead of focusing on the imminent threat of AI replacing a litany of jobs, HR professionals in the public sector should consider the ramifications of using AI tools to assist with their current day-to-day responsibilities. For example, if your use of AI tools includes prompting a chatbot to write a job description, have you considered where the data is coming from?

The vital importance of protecting an individual’s private data is another aspect of AI usage that demands HR’s attention and stewardship. As for the so-called objectivity of using AI in recruitment, consider the well-documented bias and discrimination that researchers have detected in algorithms’ assessments of candidates.

These key considerations were raised last week by PSHRA CEO Cara Woodson Welch and Fisher & Phillips attorney Anne Hanson during a compelling and provocative fireside chat at the annual PSHRA Conference in San Diego.

“From a human resources perspective, you have to be extra careful about what [identifiable information] you’re plugging in and what tool you’re using,” Welch said. “Do you have a limited sandbox?”

The Most Popular AI Tools and Their Shortcomings

ChatGPT is not a search engine, nor should it be used for research, particularly in the workplace. It is a large language model-based chatbot. Welch pointed out a cringeworthy example of attorneys conducting legal research on ChatGPT and citing nonexistent court cases in their argument.

Bard is an AI chatbot that uses machine learning, natural language processing, and generative AI to understand user prompts and provide text responses. However, Google explains that Bard will not always “get it right” and may even give inaccurate or offensive responses.

Because Bard uses extensions to connect you with useful content, it may also share parts of your conversations and other relevant information, like your location, with other services. Those services may use that information for their own improvement, even if you later delete your Bard activity.
Note: You can turn off this feature, but many users aren’t aware of this section of the user agreement.

“It’s all a predictive text analytic,” scholar Safiya Noble said of ChatGPT in her discussion with NPR’s Ailsa Chang. Noble voiced concern that there is no human being, or set of people or experts, culling through [the data] with the kind of understanding needed to help us make sense of the output.

“I’m concerned about the way in which machine learning and predictive analytics are both overdetermining certain kinds of outcomes,” said Noble, professor of gender studies and African American studies at UCLA and author of the book Algorithms of Oppression: How Search Engines Reinforce Racism.

One key takeaway: AI doesn’t make judgments; it correlates. Hiring managers could be reinforcing bias through the correlations those tools make, Welch said. “Are we sourcing fewer candidates or candidates who are less diverse because of this [AI] tool?”

The Human Element

Yet another digital-age challenge will be training and developing staff to become more comfortable using algorithms for positive results. This will lead to a shift in the skills necessary to increase AI’s usefulness and to thrive in this brave new work environment.

In addition to having AI help with onboarding and administering rote forms, strategic HR business partners should consider how AI can make business processes better, Hanson said.

With the complexity and confusion surrounding the limitations and improper use of AI, Hanson said she expects to see a rise in people with a new job title: chief AI compliance officer. “What kind of guardrails do we need to put in [place]?” she asked.

At the close of their discussion, Welch and Hanson both emphasized the importance of periodic HRIS audits. They also threw down the gauntlet, suggesting that someone with an ethical mandate (perhaps a future chief AI compliance officer) will need to step up and determine what AI use looks like in their organization. That is how HR operations will stay fully compliant with regulations and procedures.

“AI isn’t here to replace the mission of public service,” Welch said. “In the public sector, you’re very unique to your culture and community . . . and any policy that you put into place has to reflect the people you serve. There’s always going to be a human element that is necessary to this work.”