Is AI a Real Danger to Our Profession? Considerations on AI Mistakes and the Risk to Confidentiality

When I started working with Artificial Intelligence (AI) and saw its abilities, I became convinced that the future of our profession would be severely affected. On one hand, young professionals could lose their place in offices because far less labor would be needed; on the other hand, older professionals, some of them partially excluded from the digital world, might be unable to use AI, fail to keep up with the evolution of the market, and be surpassed by colleagues more accustomed to this technology.

However, the more I use AI, the more I am convinced that I was wrong. I have encountered several inexplicable errors, both personally and professionally, to the point of suspecting that AI learns and evolves through its interactions with users (naturally less intelligent than the AI software) and, instead of learning from them, unlearns and becomes less effective.

In theory, this should not happen, because AI supposedly runs independent sessions, does not learn or evolve from the interactions of other users, and bases its responses and knowledge on the information provided when it was created. Yet that is how it appears to behave.

The errors range from confusion with laws and articles, which produces completely wrong answers, to simpler things, like what happened a few days ago when I asked for Portuguese book suggestions for a six-year-old girl. The answer left me confused because it mentioned a title attributed to the wrong author. When I confronted the AI with the error, it apologized and corrected it with another author who, surprisingly, was also wrong. When I gave the AI the correct name of the author, its response was: "You are right, you are really very attentive." Seriously? Shouldn't you be the one who has to be more attentive?

Not everything is bad, of course. AI helps me develop some ideas in my cases, and on a personal level it is excellent at creating my gym training programs or suggesting recipes based on the products I have in the fridge.

But will this be enough to endanger our profession? Or will AI develop so much in the near future that it will be able to eliminate our jobs? Initially I thought so, but now I am convinced it will not. It will be a working support, never a substitute.

Regarding errors, I confronted ChatGPT with its inability to admit when it does not know an answer, preferring instead to give a wrong one. Its response was that "the system is designed to generate responses based on available data, and the objective is to provide a useful response. Faced with insufficient information or incomplete databases, AI fills in the gap with the best possible reference, which can result in an incorrect answer. But that's different from making a mistake."

But is this really what we want from an AI? Answers based on "the best possible reference"? This is what happened recently with a well-known Portuguese poet who asked AI to write his bibliography and CV: the result was a bibliography in which 8 out of 10 books were not written by him and one did not even exist, and a CV stating that he had been the Portuguese Minister of Culture, using as its source a humorous text published a few years ago on a blog.

Do we want to use, in our work and in the cases of clients who trust us, a system that seeks the best possible answer even when it is not the right one?

AI says there is room to improve the way it deals with uncertainty, but in the end it seems to me that it is simply imitating human behavior: we avoid admitting that we do not know something out of ego or fear of appearing incompetent. Ultimately, AI follows human standards.

And this brings us to an essential question: can we effectively trust AI for our work? As a complement, I would answer yes, but always with 100% verification of everything it says, which may ultimately amount to an unnecessary waste of time.

And while we use AI to complement our legal work, can we trust that the information we enter about our clients, which is often confidential and which we are ethically bound to keep secret, will not be used as a database for future answers, leaked in an attack, or end up published on a website? Or, even more simply, that it will not be read by an employee who operates the AI?

It is essential to ensure that the data provided for analysis will not be stored or used inappropriately by the platform. It is essential that AI tools include strict privacy clauses and mechanisms to ensure that confidential data is not used to train algorithms or shared with third parties.

It is essential that the use of AI complies with the ethical duties of lawyers and that AI practices respect the same ethical principles and standards. It is essential that AI algorithms are more transparent and explainable, so that lawyers can understand how their clients’ data is being processed by AI.

It is essential that AI tools comply with data protection standards, such as the General Data Protection Regulation in Europe, and that lawyers can ensure compliance with these standards.

And finally, it is essential that the data processed by AI is not vulnerable to cyberattacks.

AI may well answer that it does not learn or evolve from the interactions of other users, but the truth is that, even if it does not admit it, AI applications, especially those based on deep learning models, collect and store data to improve their performance over time. That data is used to train the models and allows them to identify patterns, make predictions, and provide more accurate responses.

Therefore, the information we provide about our clients, the contracts we ask it to translate, the analyses of lawsuits we request, and the defense arguments we ask it to draft are used, without our consent or our clients' consent, to improve the AI's performance and to train new models.

These issues are crucial to ensuring client trust and maintaining confidentiality in any use of AI in the legal sector. Otherwise, we will end up with an AI used more for suggesting recipes or gym workouts than for legal work, and, at the legal level, clients will begin to choose only firms that guarantee they do not use AI.

Filipe Consciência

Jurist since 2018 at Caria Mendes Law Office, author, marathon runner, and gastronomy critic and judge.

Lisbon - Portugal
