Introduction
The Center for Humane Technology (CHT) is a nonprofit organization cofounded by Tristan Harris, Aza Raskin, and Randima Fernando to inform the public about the harms of technology and to promote more ethical alternatives for society.
To develop a clearer picture of the center, one must understand its beliefs. Longtermism is a belief the center never explicitly acknowledges but relies on throughout its work; it is woven through The AI Dilemma and invoked on multiple occasions. The center also draws on humanism, which places the responsibility for regulating AI on humans.
Digital security is a central concern for society and, thus, for the CHT. Machine learning programs have only increased the risk of data being stolen and sold. With its reputation and size, the center has the platform needed to inform society of these dangers.
Addressing the relationship between artificial intelligence (AI) doomerism and the long-term outlook of the CHT, this case study explores the center's philosophical beliefs and security concerns alongside common worries about the moral consequences and ultimate hazards of AI.
Conclusion
The center uses longtermist thinking in the way it was meant to be used. The work done in The AI Dilemma and by Tristan Harris places the center on the fringes of longtermist belief. It uses longtermism primarily to instill fear in its audiences in the hope that this fear will push them to make change. However, this reliance on longtermism blinds the center to the real dangers AI poses to current society and, as a result, prevents its work from producing tangible results.
One such real danger is digital security. Despite the many examples of how beneficial AI and machine learning are to modern digital security, the risks of careless implementation cannot go unnoticed. Companies rush to whoever can integrate a version of AI into their products first. While this speeds product advancement, improper regulation leads to disaster. Whether facing weak cybersecurity or a complete corruption of data at the start of an AI's lifespan, companies must be ready for these attacks or be able to take responsibility for their inability to react in time. The Center for Humane Technology's stance on digital security with regard to AI is genuine, and it does warrant public education to prepare and defend everyone against such attacks.
Although the cofounders have created a humanistic environment for educating the public to make better-informed ethical decisions about technology and cyber attacks, there is still a long way to go. That environment extends to cofounders who also use AI in ways that may better the planet for future generations. By focusing on human behavior, the center continues its mission by partnering with those who share its beliefs.
With that in mind, the CHT and those who believe in AI doomerism share the goal of navigating the ethical minefield of AI. This study emphasizes the importance of striking a balance between risks and rewards, acknowledging concerns while supporting sustainable progress toward a humane and environmentally friendly future for artificial intelligence.