Harnessing Neurodivergence: My Journey in AI, Ethics, and Innovation

I am a data scientist who is passionate about building AI solutions. I am autistic and have ADHD, so I’m neurodivergent, and I have a passion for AI ethics. How are AI, neurodiversity, and ethics connected? Very intimately. Neurodiversity matters because diversity of thought is essential for innovation, and nothing produces more diverse thinking than a diversity of mind types themselves.

My Neurodivergent Journey

I was diagnosed with what was then called Asperger’s Syndrome in 2000 - I now simply identify as autistic. I was also recently diagnosed with ADHD (at the time of my original diagnosis, it was not possible to be diagnosed with both simultaneously). It took quite a while for me to embrace my neurodivergence - it wasn’t until grad school that this began to change. In particular, I set up and ran a science summer camp at Penn State to inspire autistic high schoolers to go to college. Helping those kids was an inspirational and emotional experience, and it was the first time I was publicly out as neurodivergent.

Once I joined IBM after grad school, I quickly found what was then #autism-at-ibm, an internal public channel dedicated to autism, which eventually became #neurodiversity-at-ibm to encompass all of neurodiversity. The person who became my mentor at IBM, Beth Rudden (now the CEO of Bast.ai), found me there and quickly taught me what turned out to be one of my most important needs in the company: understanding the politics and maneuverings of a large business.

In the meantime I became more active in the autism business resource group (BRG), which evolved into the neurodiversity business resource group. I educated the BRG leaders on differing aspects of neurodiversity, and since I was active in the online neurodivergent community myself, the BRG’s language was quickly updated to current, community-preferred terminology (identity-first language, no puzzle pieces, “neurodivergent” for individuals rather than “neurodiverse”, etc.). I co-founded the #actually-autistic private channel, a safe space for autistic IBMers, which we used as a focus group for initiatives related to autism in the company. The channel turned out to be tremendously important: having a safe space made autistic employees more comfortable being out inside the business, in activities such as panels and education sessions for caretakers of autistic people.

As the BRG transformed into Neurodiversity@IBM, I became one of its co-chairs after one of the BRG leaders, who wasn’t neurodivergent himself, stepped down and wanted an autistic person to take the reins. We created #actually-neurodivergent, another safe space channel designated for all neurodivergent IBMers (not just autistics), and it proved transformational as a community - by the time I left IBM, it had reached roughly 450 members. In both channels, people could vent about things they couldn’t elsewhere in the company (e.g. manager disputes), seek support from those close to the HR system, trade tips with other neurodivergent people (ADHD was statistically among the most common identities, so there were a lot of ADHD tips), and more. Both channels were heavily intersectional; there was significant overlap between the neurodiversity community and the LGBTQ+ community in the company. We also saw, anecdotally, many late diagnoses: neither channel required a diagnosis, only identity, so we had members in their 50s and beyond who had been diagnosed, or realized they were autistic or otherwise neurodivergent, much later in life.

Our BRG hosted many intersectional events to inspire inclusion alongside the other communities at IBM (Black, Pan-Asian, Native American, Women, Hispanic), and I am proud of the community we built. My work centered on engagement in the safe space channels, but we also held many educational events throughout the company, especially to address the need for education outside the Anglosphere, where neurodiversity is far less well known (India, Japan, etc.) - something I was proud to do in such an international company.

The Intersection of AI, Ethics, and Neurodiversity

My work at IBM wasn’t only tied to the business resource group, of course - my day job was as a Data Scientist, after all. I also participated in the Academy of Technology, where we had several initiatives related to data science and AI ethics. That mattered to me: as a neurodivergent person, I care about AI ethics because of the many paths to discrimination this space opens up. For example, personality tests and AI face monitors used in business interview processes can and will negatively impact autistic people - personality tests because they select against neurotic personality traits (which are correlated with neurodivergence), and face monitors because the body language of autistic people differs from the norm, so a model whose ground-truth labels encode neurotypical expectations will discriminate against it.

After I watched the documentary “Persona”, which covered the systemic discrimination personality tests inflict on neurodivergent people, I worked with the Neurodiversity BRG and the HR system to improve the screening and interview process at IBM (thankfully, we had not been using any AI screening by that point). One key win was removing the documentation requirement for interview accommodations.

In the realm of AI development, particularly in areas intersecting with human resources, ethical and inclusive considerations are paramount to ensure fairness, diversity, and equity in the workplace. The evolution of AI and machine learning technologies has undoubtedly transformed many aspects of business operations, including the recruitment process. However, this transformation brings with it a responsibility to guard against inherent biases that may inadvertently perpetuate discrimination, especially against neurodivergent people.

The importance of inclusive AI development is multi-faceted:

  1. Reduction of Bias: Traditional AI models, including those used in personality tests and facial recognition for interview processes, can harbor biases based on the data on which they were trained. These biases can lead to the exclusion of neurodivergent candidates, not due to a lack of skills or capabilities, but because of characteristics that are unrelated to job performance. Ensuring that AI technologies are developed with an inclusive dataset and regularly audited for biases is critical in minimizing discrimination.

  2. Diverse Workforce Benefits: Diversity in the workplace, including neurodiversity, has been shown to enhance creativity, innovation, and problem-solving capabilities. By creating AI tools that are mindful of neurodivergence, companies can tap into a wider talent pool, fostering environments where different perspectives are valued and leveraged for collective success.

  3. Legal and Ethical Compliance: There's an increasing awareness and regulatory emphasis on digital accessibility and anti-discrimination. Ethical AI development aligns with these legal frameworks, ensuring that companies not only comply with regulations but also embrace the spirit of inclusivity and equity.

  4. Brand Reputation and Employee Loyalty: Demonstrating a commitment to ethical AI and inclusivity can enhance a company's reputation as an employer of choice and can lead to higher levels of employee engagement and loyalty. This is particularly relevant in competitive industries where attracting and retaining top talent is crucial.

  5. Customizable and Flexible Solutions: By considering the needs of neurodivergent individuals in AI development, technologies can be designed to be more adaptable and customizable. This approach can benefit all users by providing more personalized and effective tools for a variety of contexts, including recruitment, onboarding, and ongoing support.
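The bias auditing mentioned in point 1 can be sketched as a simple selection-rate comparison. Below is a minimal, hypothetical illustration using the "four-fifths rule" heuristic (a common screening threshold, not a legal test); the group names and decision data are invented:

```python
# Minimal bias audit: compare selection rates across groups using the
# "four-fifths rule" heuristic (a common screening threshold, not a legal test).
def selection_rates(outcomes):
    """outcomes: dict mapping group -> list of 0/1 hiring decisions."""
    return {g: sum(v) / len(v) for g, v in outcomes.items()}

def four_fifths_check(outcomes):
    rates = selection_rates(outcomes)
    best = max(rates.values())
    # Flag any group whose selection rate falls below 80% of the best rate.
    return {g: r / best >= 0.8 for g, r in rates.items()}

decisions = {
    "group_a": [1, 1, 0, 1, 1, 0, 1, 1],   # 6/8 = 0.75 selected
    "group_b": [1, 0, 0, 0, 1, 0, 0, 0],   # 2/8 = 0.25 selected
}
print(four_fifths_check(decisions))  # {'group_a': True, 'group_b': False}
```

A real audit would, of course, slice by many attributes at once and track results over time, but even a check this simple can surface the kind of disparity that goes unnoticed without measurement.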

To achieve these benefits, companies must engage in continuous dialogue with neurodivergent communities, experts in AI ethics, and legal advisors to ensure that AI tools are developed and implemented in a manner that respects the diversity of human experiences. This involves not only the initial design and development phases but also continuous monitoring and revision of AI systems to address emerging biases and barriers.

The shift towards more inclusive and ethical AI development requires concerted efforts across industries. It necessitates a change in mindset from viewing AI as merely a tool for efficiency to understanding its broader implications on societal equity and diversity. By prioritizing these values, companies can lead the way in creating a more inclusive future, where technology serves as a bridge rather than a barrier to opportunity for all people, including those who are neurodivergent.

Causal AI and Ethical Implications

Causal AI can play a crucial role in identifying and mitigating discrimination against marginalized communities. Traditional methods that rely on correlation often fail to uncover discriminatory practices within algorithms: they cannot understand the underlying causes of discrimination, because they do not consider how the data was produced. An illustrative instance of this limitation is Simpson's paradox, in which statistical inferences drawn from individual groups and from the entire population can lead to opposite conclusions. Proving discrimination, in contrast, typically requires establishing a direct cause-and-effect relationship between sensitive characteristics and contested outcomes, rather than merely identifying patterns or associations between them.

A notable illustration of this principle is the analysis of graduate admissions at the University of California, Berkeley, in 1973[1]. Statistical analysis of the historical data revealed that 44% of male applicants were accepted compared to 33% of female applicants. Further investigation revealed that a higher percentage of female applicants chose to apply to more competitive programs than their male counterparts. Yet this observation does not resolve the issue of discrimination; for instance, it does not explain why women were more inclined to apply to these competitive departments. Understanding the causal mechanisms behind such patterns of discrimination - why they occur, based on the process that generates the data - is crucial for identifying and addressing the root causes of discrimination.
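Simpson's paradox is easy to reproduce. The sketch below uses invented admissions counts patterned after the Berkeley case (not the actual 1973 data): within each department women are admitted at a higher rate, yet in aggregate men appear favored, because women disproportionately applied to the more selective department:

```python
import pandas as pd

# Hypothetical admissions counts illustrating Simpson's paradox
# (numbers are invented, patterned after the 1973 Berkeley case).
df = pd.DataFrame({
    "dept":     ["A", "A", "B", "B"],
    "gender":   ["men", "women", "men", "women"],
    "applied":  [800, 100, 200, 800],
    "admitted": [500, 68, 30, 160],
})

# Per-department acceptance rates: women are admitted at a HIGHER
# rate than men within each department.
df["rate"] = df["admitted"] / df["applied"]
print(df)

# Aggregate acceptance rates: men appear favored overall, because
# women disproportionately applied to the more selective department B.
overall = df.groupby("gender")[["applied", "admitted"]].sum()
overall["rate"] = overall["admitted"] / overall["applied"]
print(overall)
```

The aggregate numbers alone would suggest discrimination against women; the per-department numbers alone would suggest the opposite. Only a causal account of how applicants sorted into departments resolves which reading is right.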

Another crucial aspect where causal AI can significantly contribute is in promoting counterfactual fairness within AI systems. Counterfactual fairness[2] goes beyond traditional notions of fairness by ensuring that an AI decision would remain unchanged if a sensitive attribute about an individual (such as race, gender, or disability status) were altered, all else being equal. This concept relies on the ability to model and understand hypothetical scenarios or "counterfactuals," which is central to causal reasoning. By applying causal inference techniques, developers can simulate how changes in these sensitive attributes might affect the outcomes of AI decisions, thereby identifying and correcting biases that traditional statistical or correlational methods might miss.
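The counterfactual check described above can be sketched with a toy structural causal model. All relationships and thresholds here are invented for illustration: we hold an individual's latent qualification fixed, flip the sensitive attribute, and ask whether the decision changes:

```python
import random

random.seed(0)

# Toy structural causal model (all relationships invented for illustration):
#   u            = latent "qualification", independent of the sensitive attribute
#   a            = sensitive attribute (0/1 group membership)
#   score(a, u)  = observed screening score, which a influences directly
def score(a, u):
    return u + 0.5 * a          # the sensitive attribute leaks into the score

def biased_decision(a, u):
    return score(a, u) > 1.0    # uses the contaminated score

def fair_decision(a, u):
    return u > 1.0              # depends only on the latent qualification

# Counterfactual-fairness check: hold the latent u fixed and flip a.
# A counterfactually fair predictor decides the same way in both worlds.
def is_counterfactually_fair(decide, n=10_000):
    for _ in range(n):
        u = random.gauss(1.0, 0.5)
        if decide(0, u) != decide(1, u):
            return False
    return True

print(is_counterfactually_fair(biased_decision))  # False
print(is_counterfactually_fair(fair_decision))    # True
```

In practice the latent variables are not observed and must themselves be inferred from an assumed causal graph, which is exactly why this notion of fairness requires causal modeling rather than correlational auditing.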

In the Academy of Technology at IBM, our project on Systemic Equity focused on enhancing process pipelines—for instance, from recruitment to attrition—for marginalized groups. A key insight from our work was the pivotal role of causal mechanisms in both uncovering and addressing systemic inequities. Causal mechanisms allow us to trace and understand the root causes of disparities within organizational processes. By identifying these underlying causes, we can implement targeted interventions that not only address the symptoms of inequity but also tackle the structural factors perpetuating these disparities.

For example, if an analysis reveals that a specific stage in the recruitment process disproportionately filters out candidates from marginalized backgrounds, understanding the causal factors at play—be it biased assessment criteria, reliance on non-inclusive sourcing channels, or inadequate representation in decision-making panels—enables us to make informed adjustments. These might include revising evaluation metrics to be more inclusive, diversifying recruitment channels, or altering the composition of selection committees to ensure broader perspectives.

Moreover, causal analysis helps in preempting potential inequities by allowing organizations to model the impact of various policies and practices before their implementation. This proactive approach to equity ensures that systemic biases are not inadvertently embedded into new processes or technologies, fostering a culture of continuous improvement and inclusivity. For example, one study used quantitative causal inference to uncover structural racism.[3]
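The kind of decomposition behind this style of analysis can be sketched in a toy linear model. All coefficients below are invented: a sensitive attribute affects an outcome both directly and through a mediator (say, a sourcing channel in a recruitment pipeline), and in the linear case the total effect splits exactly into those two paths:

```python
import random

random.seed(0)

# Invented coefficients for a toy linear structural model:
#   m = alpha * a + noise              (a affects the mediator, e.g. sourcing channel)
#   y = beta * m + gamma * a + noise   (mediator and a both affect the outcome)
alpha, beta, gamma = 0.8, 0.5, -0.3

def mean_outcome(a, n=200_000):
    """Average outcome under the intervention do(A = a)."""
    total = 0.0
    for _ in range(n):
        m = alpha * a + random.gauss(0, 1)
        y = beta * m + gamma * a + random.gauss(0, 1)
        total += y
    return total / n

# Total causal effect of a on y, estimated by simulated intervention.
total_est = mean_outcome(1) - mean_outcome(0)

# In this linear model the total effect decomposes exactly into a
# direct path (gamma) and an indirect path through the mediator (alpha * beta).
print(round(total_est, 2))   # close to gamma + alpha * beta = 0.1
```

Separating the direct path from the mediated one is what tells an organization whether to intervene on the outcome stage itself or on the upstream mechanism feeding it.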

Integrating neurodivergent perspectives into the development and application of causal AI can significantly enhance its capacity to identify and rectify ethical issues, particularly in reinforcing counterfactual fairness and systemic equity. Neurodivergent people (like myself) often bring unique viewpoints and sensitivities to the table, shaped by their diverse experiences with navigating a world not always designed with their needs in mind. This unique lens can be invaluable in pinpointing subtle biases and overlooked ethical considerations in AI systems. For instance, in the pursuit of counterfactual fairness, neurodivergent insights can help to more accurately model the myriad ways in which sensitive attributes intersect with societal biases, ensuring that AI decisions do not inadvertently perpetuate discrimination under hypothetical scenarios where these attributes are varied.

Moreover, as we've seen in efforts like IBM's Academy of Technology project on Systemic Equity, understanding and adjusting for the complex causal networks that lead to disparities requires a broad and inclusive perspective. Neurodivergent people can identify potential barriers and biases in processes and technologies that might not be evident to neurotypical developers and analysts. Their contributions can guide the design of causal models and interventions that not only aim for surface-level fairness but also address deeper structural inequities. This inclusive approach to causal AI development not only makes ethical sense but also enriches the AI systems we build, making them more robust, fair, and reflective of the diverse society they serve. Engaging with neurodivergent perspectives ensures that AI development is not just about avoiding harm but actively contributing to a more equitable and understanding world.

Creating a Responsible and Inclusive Tech World

In today's rapidly evolving tech landscape, fostering an environment that values neurodiversity is not just a moral imperative but a strategic advantage. Companies and organizations looking to integrate neurodiversity into their culture and operations can adopt several strategies to ensure a more inclusive, innovative, and responsible tech world. Here are actionable steps to achieve this goal:

  1. Tailored Recruitment Practices: Adopt recruitment practices that recognize and accommodate neurodivergent traits. This can include offering alternative interview formats, providing clear and detailed job descriptions, and using recruitment channels that are actively engaged with neurodivergent communities.

  2. Inclusive Workplace Environment: Create an inclusive workplace that accommodates diverse needs, such as quiet workspaces, flexible working hours, and access to support services. Encouraging open dialogue about neurodiversity and providing education on the topic can also help build understanding and support among all employees.

  3. Ongoing Training and Acceptance: Implement regular training sessions for staff at all levels on the benefits of neurodiversity and how to support neurodivergent colleagues. Acceptance initiatives can help dispel myths and reduce stigma, fostering a culture of inclusivity and respect.

  4. Business Resource Groups (BRGs): Support or establish BRGs for neurodivergent employees and their allies. These groups can offer a forum for sharing experiences, discussing challenges, and advocating for workplace changes that benefit neurodivergent people.

  5. Accessible Technology and Tools: Ensure that workplace technology is accessible and customizable to meet diverse needs. This might include software that supports different learning styles or communication preferences, as well as physical accommodations in the workspace.

  6. Feedback Mechanisms: Create safe and accessible channels for feedback from neurodivergent employees on their workplace experience. This feedback should be actively used to make continuous improvements.

Integrating neurodiversity into the tech industry is crucial for building a world that is not only innovative but also equitable. Neurodivergent individuals often possess unique skills and perspectives that can drive innovation and problem-solving. By creating spaces that welcome these perspectives, the tech industry can develop solutions that are more reflective of and responsive to the needs of a diverse user base.

Moreover, an inclusive approach to neurodiversity signals a broader commitment to responsibility and equity in technology. It challenges the industry to think critically about whom its technologies serve and the societal impact of its innovations. In doing so, it contributes to a tech world that prioritizes the well-being and dignity of all individuals, particularly those from marginalized groups.

Conclusion

From my perspective as a neurodivergent data scientist deeply passionate about AI ethics, this post underscores a fundamental truth: innovation thrives on diversity of thought, and neurodiversity is a key driver of this diversity. My journey, from embracing my neurodivergence to advocating for neurodiversity at IBM, illustrates the profound impact that inclusive environments and practices can have on individuals and organizations alike. Through my work, particularly in the realms of AI development and ethics, I've seen firsthand the potential for discrimination in AI applications and the critical need for ethical considerations to guide AI development. This is especially true for neurodivergent individuals, who may be uniquely impacted by biases in AI-driven processes.

The exploration of causal AI and its role in identifying and mitigating discrimination against marginalized communities highlights the importance of integrating neurodivergent perspectives into AI development. These perspectives not only enrich our understanding of ethical issues in AI but also contribute to more equitable and inclusive AI systems. By sharing strategies for integrating neurodiversity into company cultures and operations, I aim to emphasize that creating a responsible and inclusive tech world is not just a moral imperative but a strategic advantage that fosters innovation and problem-solving.

In summary, this blog post is a call to action for the tech industry to prioritize diversity, equity, and inclusion—not as buzzwords, but as fundamental principles guiding the development of technology. By valuing and integrating neurodivergent perspectives, we can build a tech world that is not only innovative but also equitable and inclusive of all, especially those from marginalized groups.

References

  1. Su, C., Yu, G., Wang, J., Yan, Z., & Cui, L. (2022). A review of causality-based Fairness Machine Learning. Intelligence & Robotics, 2(3), 244–274. https://doi.org/10.20517/ir.2022.17

  2. Kusner, M. J., Loftus, J. R., Russell, C., & Silva, R. (2018, March 8). Counterfactual fairness. arXiv.org. https://arxiv.org/abs/1703.06856

  3. Graetz, N., Boen, C. E., & Esposito, M. H. (2022). Structural racism and quantitative causal inference: A life course mediation framework for decomposing racial health disparities. Journal of Health and Social Behavior, 63(2), 232–249. https://doi.org/10.1177/00221465211066108
