Tech Responsibility in the AI Era

The use of technology often goes awry not because of the technology itself, but because of our narrow focus. We tend to design products and applications around our own perception of a target audience or use case, without considering a wider range of potential users or usage scenarios. This narrowness can lead to problems arising from unanticipated misuse of the technology.

Artificial intelligence (AI), in particular, presents unique social and ethical challenges. These stem from the nature of AI itself, including its reliance on statistical analysis, its propensity to reinforce existing biases, and its inability to understand the limits of its own knowledge. Challenges also arise from what AI developers and users do not fully grasp, such as the data used to train AI models, the rationale behind AI’s workings, and AI’s capacity to mislead users into perceiving it as human-like intelligence.

Despite these challenges, AI does not fundamentally alter the field of responsible technology; rather, it brings existing problems into sharper focus. Issues like intellectual property, for instance, have been debated for centuries. AI, however, and specifically large language models (LLMs), has raised new questions about what constitutes fair use when machines can replicate an individual’s unique style or voice.

The principles of responsible technology, formulated over decades, remain pertinent in this era of AI evolution. Key principles, such as transparency, privacy and security, careful regulation, consideration of societal and environmental impacts, and a commitment to diversity and accessibility, remain essential to ensuring technology benefits humanity.

The significance of these principles is evident in the 2023 report “The state of responsible technology,” released by MIT Technology Review Insights in partnership with Thoughtworks. The report reveals that 73% of the business leaders surveyed agreed that the responsible use of technology will soon be as crucial as business and financial considerations in technology-related decision-making.

Additionally, the current phase of AI development presents a distinctive opportunity to surmount obstacles that have previously hindered progress on responsible technology. According to the survey, the top hindrances were a lack of senior management awareness (52%), organization-wide resistance to change (46%), and internal competing priorities (46%). Organizations with clearly defined AI strategies that comprehend AI’s transformative potential can use this moment to overcome those barriers.

To seize this moment of technological disruption, it is crucial to integrate the principles of responsible technology into the transition. There is considerable optimism that we can use AI effectively while maintaining control through prudent regulation and effective processes.

The ultimate aim of responsible technology is to expand our perspective, ensuring we consider the broader implications of technology rather than focusing only on the specific problem we are trying to solve. This wider view also helps us understand our interconnectedness with others in the world.

For further information on Thoughtworks’ insights and recommendations on responsible technology, please visit Looking Glass 2024.

This content was produced by Insights, the custom content team at MIT Technology Review. It was not authored by the editorial staff of MIT Technology Review.