Responsible AI: 5 Keys to Ensuring Your Company is Prepared

AI is both an opportunity and a threat: the opportunity is for humans to accelerate the attainment of desirable outcomes by leveraging machine capabilities; the threat is the risk AI poses to human rights and welfare.

Neither the opportunity nor the threat has gone unnoticed: actors from both the private and public sectors have engaged. Data-rich domains such as transportation, fintech and healthcare have led adoption, as measured by investment, while uptake in software, robotics, biotech, defense and broader applications of AI continues apace. In the public sector, topics of policing, security and identity feature regularly. Academia is encouraged in its role as research provider, thought leader and moderator between the private and public spheres. Adopting a responsible AI approach is judicious and will require balancing private, public and academic contexts.

In pursuit of such responsibility, governments, regulators and industry alike are advancing regulation, legislation, frameworks, policy, tools and guidance to develop best practices and outcomes that are ‘in tune’ with their regional, cultural, socio-economic and political characteristics.

Everyone wants to achieve the competitive advantages and benefits of AI, but no one wants to put the vulnerable at risk, exacerbate deprivation in society or, at worst, be held captive. It is critical that freedom, justice and transparency are maintained. Responsible AI is, at heart, a discussion of ethics and trust.

But are global activities aligned? What is the definition of responsible AI, and is it shared globally?

Current Approaches & Global Best Practices

Let’s start with China.

Chinese AI unicorns lead in AI market capitalization: as of October 2021, China had the biggest share of AI startups, with 19 unicorns headquartered in the country.

In June 2019, the National New Generation Artificial Intelligence Governance Committee in China released eight principles to be observed by those working in AI development:

  • Harmony and human-friendliness
  • Fairness and justice
  • Inclusion and sharing
  • Respect for privacy
  • Safety and controllability
  • Shared responsibility
  • Openness and collaboration
  • Agile governance

Most recently, China grabbed attention with privacy legislation: the Personal Information Protection Law (PIPL) took effect in November 2021, following closely on the heels of China’s Data Security Law and Cybersecurity Law. Some critique the PIPL for making national security and state power too dominant a theme; in any case, western companies have been visibly concerned that they cannot adhere to its requirements within the bounds of reasonable risk.

While the Chinese version of privacy ethics may be too challenging in the scope of national authority and control exercised, the U.S., with its less interventionist stance, lacks national cohesion. The Algorithmic Accountability Act of 2021 did not advance and, given that the U.S. Constitution makes no explicit provision for a ‘right to privacy,’ every U.S. state has been left to its own devices. Some states, like California, Virginia and Colorado, have created their own privacy protection laws, and many others have similar bills in the legislative process.

In contrast, the EU is one of the first jurisdictions pursuing regulation ‘designed for purpose.’ The 2016 GDPR privacy legislation has already had a global impact, both directly and as an influence, and we should expect the EU to play the same leading role in AI harmonization. The EU’s proposal for AI regulation and harmonization across member states is designed to foster AI: it would provide the legal certainty necessary to motivate innovation while protecting consumer rights. Like GDPR, the proposed legislation applies to any person or organization handling EU citizens’ personal data, including those based outside the EU. The accountabilities of the AI Act go further than GDPR by proposing to directly regulate the use of AI systems, with sandboxes maintained and hosted by member states to verify usage. Companies will therefore need to demonstrate their commitment to balanced AI by documenting the software validation methods they apply.

AI systems are to be classified into four types based on level of risk (unacceptable, high, limited and minimal) and must be transparent, such that both the service provider and the user know the risk type and the method of mitigation. If the provider of an AI system fails to conform, the penalty can be as high as EUR 30M or 6% of the company’s global annual turnover, whichever is higher.
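To make the risk-tier classification and the ‘whichever is higher’ penalty cap concrete, here is a minimal Python sketch; the enum mirrors the proposal’s four levels, while the class name, function and example turnover figure are purely illustrative assumptions:

```python
from enum import Enum

class AIRiskTier(Enum):
    """The four risk tiers in the proposed EU AI Act (names are illustrative)."""
    UNACCEPTABLE = "unacceptable"  # prohibited practices
    HIGH = "high"                  # strict obligations before deployment
    LIMITED = "limited"            # transparency obligations
    MINIMAL = "minimal"            # largely unregulated

def max_penalty_eur(global_annual_turnover_eur: float) -> float:
    # Cap for non-conformance: EUR 30M or 6% of global annual turnover,
    # whichever is higher.
    return max(30_000_000.0, 0.06 * global_annual_turnover_eur)

# Example: a company with EUR 1B global turnover faces a cap of EUR 60M.
print(max_penalty_eur(1_000_000_000))  # 60000000.0
```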

How Should Organizations Move Forward with Responsible AI?

AI legislation is evolving, and there are real unknowns to navigate in preparing to provide AI services. What, then, can your organization do to make a positive difference and limit risk? Here are some key practical steps:

1. Anticipate the technical, economic and legal challenges that will apply however cross-jurisdictional AI regulation and legislation evolve. Start to prepare now. Develop go/no-go criteria for proceeding with or cancelling projects, and determine who will have the organizational willpower to say no when appropriate (a minimal sketch of such a gate follows this list).

  • The technical challenge is chiefly defending the fairness of AI algorithms and datasets so that consumer rights, particularly those of vulnerable parties, are not undermined. Divisions within your organization need to ensure that table stakes, such as the privacy requirements of your jurisdiction(s), are met, knowing there is no ‘one size fits all’ answer. Start preparing for AI with a simple review of personal data governance, then build AI readiness on that firm foundation.
  • Economic considerations are real. You will need to budget for AI readiness and know what amount is at your disposal. Engage in both proactive and reactive AI response planning. Be prepared to change your operating model, which, at a minimum, must be designed to monitor and measure AI opportunities and threats as part of your “business as usual” risk management plan. As an aspiration, define the specific benefits you seek to realize. Is it an increase in profitability? A reduction in cost? An enhanced reputation?
  • Legal challenges will require involving your legal counsel, chief risk officer and compliance stakeholders early on. Define the top AI risks and regulatory scenarios within your industry and run tabletop exercises to validate your practices, such as reporting AI compromises (or failures in AI that might break the EU regulations once introduced). Test your response rigor by asking questions such as: What is an appropriate granularity for risk-based categorization? Are risk categories proportional and fit for purpose? Who is accountable for fraud initiated by an AI system? To date, the EU stance has been to treat AI as a product. Are your tort and contract legal response teams ready to manage and respond? Is special training required?
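As referenced above, here is a minimal sketch of a go/no-go gate; the criterion names are hypothetical placeholders for whatever evaluation criteria your organization defines:

```python
# Hypothetical go/no-go gate: every mandatory criterion must pass before an
# AI project proceeds, and any failure names the blockers so an accountable
# owner can exercise the willpower to say no.
CRITERIA = {
    "privacy_requirements_met": True,   # table stakes for your jurisdiction(s)
    "risk_rating_acceptable": True,     # within enterprise risk appetite
    "budget_approved": False,           # economic readiness
    "legal_review_complete": True,      # counsel / compliance sign-off
}

def go_no_go(criteria: dict[str, bool]) -> str:
    failed = [name for name, passed in criteria.items() if not passed]
    return "GO" if not failed else "NO-GO: " + ", ".join(failed)

print(go_no_go(CRITERIA))  # NO-GO: budget_approved
```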

2. Examine your culture and plan for board-level and audit-committee governance structures, within and across jurisdictions and organizations, that are fit for purpose; invest early and deeply. As an example, become familiar with the key legislation, enacted or proposed, within your jurisdiction. Declare your vision and mission and communicate them. Establish a long-term benefit committee, made up of people who have no connection to the company or its backers, to have the final say on matters including the composition of the organization’s board. Last but by no means least, plan to take personnel through the coaching, mentoring and training necessary to motivate the behavioral changes needed to make the consideration of ethics paramount.

3. Familiarize yourself with the AI safety organizations you trust and respect. Train key staff on the approaches designed by respected leaders in your region (such as OpenAI, the Alignment Research Center and Redwood Research). Leverage professional capabilities and services, and use trusted third parties to independently evaluate pros and cons where needed.

4. Determine how you will self-regulate and how you’ll objectively define and measure AI risk within your enterprise risk management program. Monitor the use of datasets, and label algorithms and datasets with risk ratings (a sketch of one such register follows). Decide whether your organization is ready to pursue independent use and management of data and algorithms, or whether you can adopt the services of third-party providers or open-source solutions.
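A minimal sketch of what such labeling could look like in practice; the asset fields, rating values and example entry are assumptions, not a prescribed schema:

```python
from dataclasses import dataclass
from datetime import date

@dataclass
class AIAsset:
    """One entry in a hypothetical risk register for models and datasets."""
    name: str            # e.g., "credit-scoring-model-v3" (illustrative)
    kind: str            # "dataset" or "model"
    risk_rating: str     # e.g., "high", "limited", "minimal"
    owner: str           # accountable team or individual
    last_reviewed: date  # when the rating was last validated

registry: list[AIAsset] = []

def assets_with_rating(rating: str) -> list[AIAsset]:
    # Pull every registered asset at a given risk level, e.g., for an audit.
    return [a for a in registry if a.risk_rating == rating]

registry.append(AIAsset("loan-approvals-2021", "dataset", "high",
                        "credit-risk-team", date(2021, 11, 1)))
print([a.name for a in assets_with_rating("high")])  # ['loan-approvals-2021']
```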

5. Test and validate that your organization can meet the key principles of transparency and accountability, as a significant amount of regulatory compliance will likely be achievable through demonstrable, audited outcomes. Establish methodologies that can validate the absence of bias in key artifacts such as source datasets and models, and verify the test cases themselves (one simple metric is sketched below). AI is a dynamic system, so plan for ongoing monitoring of AI systems. Determine how you will build sandboxes to meet internal corporate policy requirements so that you’ll be ready to respond to regulatory demands once they are finalized.
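As one example of what such a methodology might include, a simple fairness check is the demographic parity difference: the gap in positive-outcome rates between groups. The sketch below is illustrative only; a value near zero on this single metric does not prove the absence of bias:

```python
def demographic_parity_difference(outcomes: list[int], groups: list[str]) -> float:
    """Gap between the highest and lowest positive-outcome rates across groups.

    outcomes: 0/1 decisions (e.g., loan approved); groups: parallel group labels.
    """
    counts: dict[str, list[int]] = {}
    for y, g in zip(outcomes, groups):
        pair = counts.setdefault(g, [0, 0])  # [group size, positive count]
        pair[0] += 1
        pair[1] += y
    rates = [positives / n for n, positives in counts.values()]
    return max(rates) - min(rates)

# Group "a" is approved 75% of the time, group "b" only 25%: a 0.5 gap.
print(demographic_parity_difference([1, 1, 1, 0, 1, 0, 0, 0],
                                    ["a", "a", "a", "a", "b", "b", "b", "b"]))  # 0.5
```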

Most importantly, engage early for awareness and learn from experience. Do not wait for regulation to pass: anticipate, lobby and be ready. If managed responsibly, AI is a powerful technology that can liberate your industry, drive profits and growth, and deliver reputational success.
