In 2018, Yuval Elovici, a scientist and professor at Ben-Gurion University, gave a TED Talk on the dangers of technology, particularly the Internet of Things (IoT). Since then, we have become more aware of the potential threats that technology poses. Case in point: I recently chatted with my friends about matcha on WhatsApp, and suddenly my TikTok and Instagram feeds were overflowing with matcha shop suggestions.
While I was happy to discover new matcha shops, it was also alarming to think about how our privacy and personal data can be compromised. Algorithms that cause this kind of harm at scale are what Cathy O'Neil calls "Weapons of Math Destruction" (WMDs). Another example of a WMD is scheduling software used in HR processes: although it can simplify a manager's tasks, it can also lead to negative consequences for the people being scheduled.
Scheduling software is built to maximize efficiency and profitability, in line with the principles of capitalism. Consider the case of Linda, a single parent and the sole breadwinner of her family. She works as a cashier at a supermarket, so remote work is not an option for her. Yet on days when her child falls sick, she still has to adhere to her scheduled work hours.
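To make the tension concrete, here is a minimal, hypothetical sketch of a cost-only shift scheduler. The worker names, rates, and the `respect_availability` flag are all invented for illustration; the point is that a system optimizing purely for cost never "sees" a constraint like a sick child unless someone deliberately encodes it.

```python
# Hypothetical cost-only scheduler: picks the cheapest worker for a
# shift, with no notion of caregiving constraints unless we encode them.

workers = {
    "Linda": {"hourly_rate": 15.0, "unavailable": {"2024-03-04"}},  # child is sick
    "Sam":   {"hourly_rate": 17.0, "unavailable": set()},
}

def assign_shift(day, respect_availability):
    """Choose the cheapest worker; optionally honor unavailability."""
    candidates = [
        name for name, w in workers.items()
        if not respect_availability or day not in w["unavailable"]
    ]
    return min(candidates, key=lambda name: workers[name]["hourly_rate"])

# Pure cost minimization assigns Linda even on the day she cannot work.
print(assign_shift("2024-03-04", respect_availability=False))  # Linda
# Encoding the human constraint changes the outcome.
print(assign_shift("2024-03-04", respect_availability=True))   # Sam
```

The design choice matters: fairness toward workers is not an emergent property of an efficiency objective; it has to be an explicit input.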
NYU's Human Capital Analytics & Technology program offers a course on Algorithmic Responsibility. The class taught me to be more critical of any technology we use (IoT, automation, AI, machine learning, natural language processing, etc.). As these technologies emerge, we need to anticipate the negative outcomes they could bring.
We can start developing a responsible algorithm by:
Using data with a clear purpose,
Scheduling regular system reviews,
Enhancing data security, and
Establishing an AI Ethics Board and implementing related policies. The board can advise the organization as it monitors how technology is implemented, promoting fairness and transparency.