Phase 1, Post 7: Updates on Technology I'm Grappling With

Update #1:
Possible ways to address diversity in the field of natural language processing (NLP):
  1. Instituting an Institutional Review Board (IRB) has not yet been seriously considered in the field of NLP, but it would be a fitting way to address current and potential ethical problems.
  2. Directly conducting some form of research with people (ideally from diverse backgrounds) would not only personalize the algorithms behind NLP, but also open up ethical discussions around the technology as a result of the direct involvement of human subjects.
  3. Incorporating vernacular into training data and lexicons would make NLP systems more natural and inclusive.
The people most affected by these suggestions would generally be those who speak in heavy vernacular or who speak the processed language as a second language.  I believe that is the biggest open question in the field of NLP.  As with any emerging technology, the developers end up holding a social responsibility: how they proceed dictates how the technology will be used and by whom.  In proceeding with natural language processing, they decide what language is to be considered natural.  This opens plenty of discussion about what computers will deem correct language, as the sketch below illustrates.
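To make that concrete, here is a minimal sketch of how a lexicon-based sentiment scorer built on "standard" vocabulary quietly treats vernacular as if it carried no meaning. The tiny lexicon and example sentences are my own illustrative assumptions, not taken from any real system:

```python
# Minimal, hypothetical lexicon-based sentiment scorer. Any word missing
# from the lexicon -- including vernacular -- silently counts as neutral.
LEXICON = {"good": 1.0, "great": 2.0, "bad": -1.0, "terrible": -2.0}

def score(text: str) -> float:
    """Sum the sentiment of known words; unknown words contribute 0."""
    return sum(LEXICON.get(word, 0.0) for word in text.lower().split())

print(score("that movie was great"))       # 2.0: standard praise is recognized
print(score("that movie was hella fire"))  # 0.0: vernacular praise reads as neutral
```

Nothing fails loudly here; the scorer simply decides that the vernacular sentence has no sentiment, which is exactly the kind of quiet judgment about "correct" language I described above.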

Update #2:
I entered "natural language" processing and "privacy" into my search engine.  While I was not the member of my group who selected "privacy," I thought it would be relevant as NLP is a sub focus of data science.  I came across Why is Privacy-Preserving Natural Language Processing Important? by Patricia Thaine, which brings up the storage of your communications data, and how services remain free through selling this data.  Thaine brings up a user's "digital footprint" and how it is important to be cautious about handing over all of your information.

After searching with the keyword "privacy," I wanted to get back to my own topic, so I searched NLP with the keyword "ethnicity."  On Medium, I found the article "Examining Gender and Race Bias in Sentiment Analysis Systems."  I believe the article directly addresses the topic I am planning my midterm project around.  The author sums up the motivation for the article:
"Automatic systems are beneficial to society but as they improve in predictive performance, comparable to human capabilities, they could perpetuate inappropriate human biases."
Finally, I researched what companies do to prevent this kind of skewed treatment.  A worker at Automattic described their workarounds for biases in these systems, saying simply that it requires "constant vigilance."
