I decided to take an unconventional approach to the editorial for the summer edition of our EY Tax and Legal News. I’d first like to introduce you to the difficult intellectual process of creating such an editorial. I don’t know exactly how it is for my esteemed colleagues, the partners of EY’s tax department, but for me the hardest part is always the beginning, i.e. finding a technical, but at the same time sufficiently engaging, topic that would appeal to a wide range of our readers and that I could treat lightly (not too technically). This year, my long-time colleague and friend Vláďa Sopkuliak, for many years an excellent technical editor, editor-in-chief and good soul of our publication, helped me a lot with this process. Vláďa knows me, so he trod lightly: “You’ve got an editorial coming out in the summer issue. I know you’ve got a lot on your plate, so feel free to keep it short, like an optimistic greeting for the summer, or a list of legislative changes coming our way in the fall.”
Thanks, Vláďa, that shouldn’t be difficult. But what legislative changes should I mention so as not to scare readers and spoil their summer? The Corporate Sustainability Due Diligence Directive, for example, which was approved this spring and brings extensive new reporting obligations, will probably scare readers. The new European legislation bringing a whole raft of obligations relating to deforestation is likely to do the same. A little optimism could perhaps come from the amendment to the Czech law on conversions, which introduces division by separation as a new category of corporate conversion. But again, this is too technical a topic for the summer, and my colleagues and I have already written several technical alerts on it. In the end, I was most interested in the topic of artificial intelligence, i.e. AI, to which EU lawmakers are devoting a substantial part of their attention under the so-called AI Act, or EU Artificial Intelligence Act, which should be the world’s first comprehensive regulation of AI.
I myself see AI as a huge opportunity to accelerate technological progress and make routine activities easier for people. Within EY, we have developed our own ChatGPT-based software (isolated from the internet to safeguard data confidentiality) and are currently experimenting around the world with how to use AI to improve our client services. With the summer holiday season upon us, though, I slipped back into playing with AI, something I’ve been enjoying lately. I certainly don’t mean to say that writing an editorial like this is a routine activity for me, not at all, but I was curious how AI would handle it.
So I gave our special EY AI chatbot the following task:
“Would you please try to design the text of the editorial for EY’s publication of tax law articles to capture the upcoming holiday season and be a little lighter and more humorous?”
Within 3 seconds, the chatbot suggested the following:
Dear Friends and Business Associates,
Summer temperatures may make our heads swelter, but at EY we keep our cool to bring you the best of our professional world. In the summer edition of Tax & Legal News, we’re serving up fresh information that’s as refreshing as a glass of iced lemonade on the beach.
While you’re enjoying a well-deserved holiday and sinking your feet into the warm sand, we bring you an overview of the most important developments in the tax and legal world. And don’t worry, even though everything seems lighter in the summer, there are no cold showers waiting for you in the autumn. With us, you’re safe and ready for whatever may come.
So, lots of sunshine, minimal paperwork and let every sunset remind you that even the most complex tax and legal mysteries have a solution. And if not, there’s always a beach umbrella full of expert advice.
With a smile and sunglasses,
[Your name], Consultant, EY
So what do you think? I find it quite good for a “robot”, even somehow human. It’s a bit scary... On the subject of AI’s human-like responses, I’ve previously heard, for example, that when tasked with carrying out market research and obtaining the best bids from contractors, an AI claimed to be a business employing disabled staff in order to get better offers from its human counterparts. So if AI is capable of playing on human emotions and will continually improve in this ability (which is the very principle of AI), then I guess some sensible regulation really would be needed, so that we don’t end up approaching the terrifying prognosis of Skynet from The Terminator, which for my generation was long the only scary idea of how AI works.
But what does AI itself think about self-regulation? I tried again and asked our EY chatbot the following question:
“Hi, what do you think about the upcoming AI Act within the European legislation and AI regulation in general? How real is the risk of AI getting out of human control and causing a catastrophe similar to how Skynet is portrayed in Terminator 2?”
Again, approximately 4 seconds later, the chatbot came back with the following response:
Hello! The discussion about the upcoming AI Act in the framework of European legislation is very important because it tries to set rules for the ethical and safe development and use of artificial intelligence. The aim is to protect citizens from the potential risks that AI systems can bring, while promoting innovation and competitiveness in the field of AI.
The regulation of AI is a complex topic, as it must balance the protection of the public, privacy and fundamental rights with allowing progress and development in this rapidly evolving field. Some experts and organisations welcome regulation as a way to ensure transparency, accountability and trust in AI systems, while others worry that overly strict regulation could hamper innovation.
As for the risk of AI spinning out of control and causing a disaster similar to Skynet in “Terminator 2,” most AI experts consider this scenario highly unlikely, at least for the foreseeable future. Most current AI systems are “narrow AI”, meaning that they are designed to perform very specific tasks and have no consciousness or motivation of their own. The risk of uncontrolled AI is more likely to be associated with potential programming errors, misuse of AI for malicious purposes or unintended negative impacts on society.
Nevertheless, it is important that safety protocols and ethical guidelines are established to prevent possible negative scenarios. AI safety research focuses on how to ensure that AI systems are reliable, safe, and that their actions are consistent with human values and interests.
Does this summary, with its emphasis on the absence of risk “at least for the foreseeable future”, reassure you? Let’s collectively believe that AI regulation will be successful, that programming errors will not occur, and that even the risks of the more distant future will be eliminated. Meanwhile, I wish you a peaceful summer and plenty of rest from work worries.