LinkedIn: Using AI Responsibly

LinkedIn posted its AI principles today. The principles are high-level, which is a good starting point, but implementing rules and policies will require more detail.

AI is not new to LinkedIn. LinkedIn has long used AI to enhance our members’ professional experiences. While AI has enormous potential to expand access to opportunity and transform the world of work in positive ways, the use of AI comes with risks and potential for harm. Inspired by, and aligned with our parent company Microsoft’s leadership in this area, we wanted to share the Responsible AI Principles we use at LinkedIn to guide our work: 

  • Advance Economic Opportunity: People are at the center of what we do. AI is a tool to further our vision, empowering our members and augmenting their success and productivity. 
  • Uphold Trust: Our commitments to privacy, security and safety guide our use of AI. We take meaningful steps to reduce the potential risks of AI.
  • Promote Fairness and Inclusion: We work to ensure that our use of AI benefits all members fairly, without causing or amplifying unfair bias.  
  • Provide Transparency: Understanding of AI starts with transparency. We seek to explain in clear and simple ways how our use of AI impacts people. 
  • Embrace Accountability: We deploy robust AI governance, including assessing and addressing potential harms and fitness for purpose, and ensuring human oversight and accountability.
“Using AI Responsibly,” LinkedIn In the Loop Newsletter (March 2023)

As with any new technology, AI can be used either for the betterment of society or for malign purposes. Setting out principles gives product management and engineering a framework for building their models, promoting trust, and reducing negative effects (e.g., recapitulating bias, spreading misinformation and disinformation).

Transparency also helps reduce negative effects. If it isn’t known why a recommendation was made, how can it be trusted? Furthermore, how does one know that the AI isn’t recapitulating somebody’s intellectual property; gathering information from incorrect, malign, or outdated sources; or making incorrect assumptions? Thus, black-box AI should be avoided.

Microsoft is the early leader in implementing generative AI, a category of AI “algorithms that generate new output based on data they have been trained on” (Gartner). The best known of these is ChatGPT, which generates text and carries on chat conversations. Microsoft recently invested $10 billion in OpenAI, the developer of ChatGPT and other generative AI tools, and is quickly moving to integrate OpenAI’s technology into Bing and other products.

On Monday, I will post about ChatGPT being integrated into Microsoft’s Viva Sales product.

