How To Design For AI
Understanding Artificial Intelligence is crucial as we continue to integrate it into various aspects of our lives and work. While many have their own definitions of what AI means, I always find it useful to ground myself in well-articulated perspectives to explore the challenges and design principles effectively.
Artificial Intelligence, as defined by the University of Helsinki’s course Elements of AI (which I strongly recommend to those interested in such things), is characterised by two fundamental qualities: autonomy and adaptivity. This definition encapsulates AI as systems capable of performing complex tasks without continuous human oversight and capable of improving their operations based on accumulated experience and data. Such systems demonstrate autonomy by operating independently and adaptivity by learning from their interactions with the world and the data they process.
Complementing this educational perspective, Andrew Ng from Stanford University offers a more technical viewpoint. He describes AI as a collection of tools — specifically, the methodologies or mechanisms through which AI systems learn and perform tasks. These tools are manifested through various forms of machine learning, including supervised, unsupervised, and reinforcement learning. Ng’s approach helps demystify the technical underpinnings of AI, making it more accessible to those looking to understand how AI systems are taught to act.
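To make Ng’s framing of “tools that are taught to act” concrete, here is a toy sketch of supervised learning: the system infers a label from human-labelled examples rather than from hand-written rules. The data, labels, and the nearest-neighbour approach are all invented for illustration, not drawn from any real product.

```python
def nearest_neighbour_predict(labelled_examples, query):
    """Predict a label by copying the closest labelled example (1-NN) —
    the simplest form of learning from examples."""
    def distance(a, b):
        return sum((x - y) ** 2 for x, y in zip(a, b))
    closest = min(labelled_examples, key=lambda ex: distance(ex[0], query))
    return closest[1]

# "Training data": feature vectors a human has already labelled.
examples = [
    ((1.0, 1.0), "spam"),
    ((0.9, 1.2), "spam"),
    ((5.0, 5.1), "not spam"),
    ((5.2, 4.8), "not spam"),
]

print(nearest_neighbour_predict(examples, (1.1, 0.9)))  # → spam
```

Nothing about spam detection is written into the code; the behaviour comes entirely from the labelled data, which is exactly the shift in mindset Ng’s definition points at.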
Together, these definitions provide a dual perspective on AI: one focusing on the theoretical and practical attributes of AI systems (autonomy and adaptivity), and the other on the technical mechanisms that enable these systems to function and evolve. This dual framework lays the groundwork for discussing how AI can be designed responsibly and effectively to enhance user experiences and meet emerging challenges.
The Impact of AI Tools
The impact AI will have on both business and society is hard to overstate. As John Culkin famously said: “We shape our tools, and thereafter, our tools shape us.” His statement reflects the essence of what is known as ontological design, a concept suggesting that the things we design, in turn, design our experiences and behaviours.
Consider the design of a chair, which not only supports sitting but also influences our posture and, by extension, our health and comfort. This principle extends to digital tools and technologies, particularly AI, which increasingly influences our decision-making processes, interactions, and accessibility to information.
AI tools like Large Language Models and image generators are not just passive utilities but active participants that mould our perceptions and interactions. These tools are instrumental in shaping sectors such as content creation, customer service, and even more personal aspects like our engagement with social media. By understanding that every design choice can have far-reaching implications on human behaviour and societal structures, we underscore the necessity for thoughtful and responsible design in AI technologies.
These AI systems, characterised by their autonomy and adaptivity, are pivotal in setting the stage for a future where technology’s role is both transformational and responsibly integrated into the fabric of everyday life. As such, the design decisions we make today are crucial in steering AI’s development towards outcomes that are beneficial and equitable, underscoring the need for a principled approach to AI design.
Ontological Design
The concept of ontological design is particularly relevant when considering the impact of AI on human experiences. It posits that by designing the world around us, from the simplest tools to complex systems, we are simultaneously designing the human experience. This theory highlights the cyclical relationship between creation and creator — everything we design ultimately designs us back. It’s a profound reflection on the impact of our creations on our ways of living, thinking, and interacting with our environment.
Consider the design of technological interfaces. How we interact with digital platforms — through touch screens, voice commands, or even gestures — shapes our cognitive and physical engagement with technology. Each interaction not only reflects current technological capabilities but also influences future developments in interface design, user experience, and even the ergonomics of digital interaction.
This reciprocal relationship is evident in how AI systems learn from the data they process. As these systems become more integrated into sectors like healthcare, finance, and education, they not only operate based on the data fed into them but also generate outputs that can influence policies, decisions, and practices in these fields. With this in mind, the design of AI systems is not just a technical challenge but an ethical one, as the decisions embedded within these systems can have significant, sometimes unintended, impacts on society.
Through the lens of ontological design, we begin to understand the importance of designing AI systems that are not only efficient and effective but also inclusive, ethical, and aware of the broader consequences of their operations. It compels designers and developers to think critically about the values and assumptions embedded in AI systems and to strive for designs that enhance and enrich human experiences rather than constrain or misguide them. This approach emphasises the need for a forward-thinking mindset in AI development, where every design is weighed for its broader cultural, social, and ethical implications.
Designing AI With Responsibility
The ethical dimensions of AI design are not just academic or speculative; they are immediate and impactful. As AI technologies become embedded in more aspects of daily life, the imperative to design these systems with an awareness of their ethical implications grows. The challenge of bias in AI systems, often cited in discussions around AI ethics, exemplifies this need. Bias in AI is not an indication of inherent malevolence within the technology but rather a reflection of the data it is trained on. Datasets mirror broader societal biases, reflecting historical and systemic inequalities that can perpetuate discrimination if not carefully managed.
The opportunity here lies in the deliberate selection and curation of the data used to train AI systems. By choosing datasets that are diverse and representative of all facets of society, designers can mitigate some of the biases that AI might otherwise perpetuate. This approach not only improves the fairness and efficacy of AI applications but also aligns with broader societal goals of equity and inclusion.
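A first, coarse step towards that kind of curation is simply measuring how well each group is represented in a dataset before training on it. The sketch below does exactly that; the records, the `dialect` field, and the thresholds a team might apply are hypothetical, and real bias auditing goes far beyond headcounts.

```python
from collections import Counter

def representation_report(records, group_key):
    """Return each group's share of the dataset — a coarse first check
    for skewed representation before training."""
    counts = Counter(record[group_key] for record in records)
    total = sum(counts.values())
    return {group: count / total for group, count in counts.items()}

# Hypothetical training records tagged with the dialect of the writer.
dataset = [
    {"text": "hello",  "dialect": "US"},
    {"text": "howdy",  "dialect": "US"},
    {"text": "cheers", "dialect": "UK"},
    {"text": "g'day",  "dialect": "AU"},
]

report = representation_report(dataset, "dialect")
print(report)  # → {'US': 0.5, 'UK': 0.25, 'AU': 0.25}
```

A report like this doesn’t fix bias on its own, but it turns “choose representative data” from an aspiration into a number a team can track and act on.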
Safety and security in AI design also demand urgent attention. Consider the challenges associated with data privacy and the potential for AI-enhanced systems to be used in ways that might compromise individual rights and freedoms. The design of AI must therefore incorporate robust security measures to protect against data breaches and misuse, ensuring that AI systems are safe for all users. On top of this, the psychological safety of interactions with AI — how these systems make users feel — must be considered. This encompasses designing AI interactions that are respectful, non-aggressive, and supportive of user well-being.
Finally, designing for probability rather than certainty reflects the non-deterministic nature of many AI systems. Unlike traditional software, where outcomes can be tightly controlled and predicted, AI often behaves in ways that can be unexpected. This unpredictability requires a design approach that accommodates variability and emphasises the importance of user feedback in shaping AI behaviour.
Design paradigms must therefore not only focus on how AI systems perform tasks but also on how they adapt and respond to user interactions in real-time.
Designing For Probability
Designing AI systems with a focus on probability rather than certainty introduces a unique set of challenges and considerations. AI is often non-deterministic, meaning that its behaviour and the outcomes it generates can be unpredictable. This characteristic is particularly evident in systems using machine learning algorithms that evolve based on their training data and ongoing user interactions.
The unpredictability of AI requires a design philosophy that accommodates a range of potential outcomes and emphasises the importance of adaptability in system responses. It’s crucial for AI designers to incorporate mechanisms that allow systems to handle unexpected behaviours or results gracefully. This might include designing AI that asks for user feedback if uncertain, or systems that adapt their operations based on new data or changing environments without requiring explicit reprogramming.
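One concrete shape this graceful handling can take is a confidence threshold: below it, the system defers to the user instead of guessing. This is a minimal sketch of that pattern; the threshold value, wording, and function names are assumptions for illustration, not any particular product’s behaviour.

```python
def respond(prediction, confidence, threshold=0.75):
    """Return the model's answer only when it is confident enough;
    otherwise defer to the user rather than presenting a guess as fact."""
    if confidence >= threshold:
        return prediction
    return (f"I'm not sure — did you mean '{prediction}'? "
            f"(confidence {confidence:.0%})")

print(respond("Paris", 0.92))  # confident: answer directly
print(respond("Paris", 0.40))  # uncertain: ask the user instead
```

The design choice here is that uncertainty is surfaced, not hidden: the user both sees the hedge and supplies the feedback that the surrounding text argues should shape AI behaviour.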
Moreover, the design must also ensure that AI systems are capable of explaining their decisions and actions in a way that is understandable to users. This is essential for critical applications where understanding AI’s decision-making process is necessary for trust and accountability, such as in healthcare diagnostics or autonomous driving.
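For simple models, that kind of explanation can be computed directly. The sketch below breaks a linear score into per-feature contributions, ranked by influence — a minimal “explain your decision” pattern. The weights and feature names are invented, and real explainability for complex models (e.g. deep networks) requires far more sophisticated techniques.

```python
def explain_linear_decision(weights, features):
    """For a linear score, report how much each feature contributed,
    ranked by absolute influence — a minimal decision explanation."""
    contributions = {name: weights[name] * value
                     for name, value in features.items()}
    score = sum(contributions.values())
    ranked = sorted(contributions.items(),
                    key=lambda item: abs(item[1]), reverse=True)
    return score, ranked

# Hypothetical spam-scoring weights and one message's features.
weights = {"urgent_word": 2.0, "length": -0.5}
features = {"urgent_word": 1.0, "length": 2.0}

score, ranked = explain_linear_decision(weights, features)
print(score)   # → 1.0
print(ranked)  # urgent_word contributed most
```

Even this toy version illustrates the goal: the system can say not just *what* it decided, but *which inputs drove the decision*, which is the basis of trust in settings like diagnostics or driving.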
Designing for probability also involves rigorous testing under diverse conditions to ensure that AI behaves as intended even when faced with unforeseen variables. This means that AI systems should not only be robust under typical conditions but also flexible enough to adapt to new, less predictable situations that may arise.
Ultimately, designing AI to handle probability effectively means creating systems that are both resilient and transparent. These systems must not only manage the unexpected but also communicate their processes and limitations to users, ensuring a reliable and trustworthy interaction with technology. This approach not only enhances user experience but also bolsters confidence in AI applications across various sectors.
Just the Beginning
The journey through defining, designing, and deploying AI underscores a critical narrative: the decisions we make today in AI design are pivotal. From the foundational definitions to the nuanced complexities of ethical design and ownership, every facet of AI development carries significant implications for both present and future interactions between humans and machines.
The principles of ontological design remind us that our creations shape our reality. As such, the responsibility lies with us, the designers, developers, and policymakers, to ensure that AI systems are developed with a keen awareness of their broader social, ethical, and cultural impacts. This includes meticulous attention to the data that trains these systems, the transparency of their operations, and the fairness of their outcomes.
AI’s potential to enhance and transform our lives is immense, but so is the responsibility to ensure it is integrated into society in a manner that respects and enhances human dignity and equity. By fostering an environment of trust through transparency, advocating for clear regulations on AI ownership, and prioritising ethical considerations in AI design, we can steer the development of AI technologies towards outcomes that are beneficial for all.
As we continue to explore and expand the boundaries of what AI can do, we should also commit to a path of responsible and thoughtful development. Let this be a guiding principle: to design AI not only with intelligence but with wisdom, ensuring that as our tools shape us, they do so in a way that reflects our highest ideals and aspirations.
Ioana Teleanu is designing Miro’s AI experience. Follow her newsletter, AI Goodies.