High-tech legislation through self-regulation

Denas Grybauskas, head of legal at Oxylabs, discusses the important role that self-regulation can play in high-tech legislation

A quick glance at our technological, scientific, and industrial history over the past few decades shows a trend towards increasing specialisation. Entering a field and becoming a true expert in it takes considerably more time than it did decades or centuries ago.

Business, though moving towards the same trend more slowly, is experiencing something similar. Explaining in-depth technical concepts to a layman with sufficient detail and nuance is becoming more troublesome. Machine learning is one such example: frequently used, but scarcely understood by people outside the technical world.

Unfortunately, legislators cannot be experts in every field, yet there seems to be an implicit expectation that they should be. After all, how can we enact fair and just laws, or even principles, without a proper understanding of the topics at hand?

The challenge of technological progress

Practical applications of technological progress usually combine several distinct advances into one. Web scraping, for example, relies on a multitude of developments in computing, from obvious ones such as internet connection speeds to slightly less evident ones such as proxies.

Fortunately, current iterations of web scraping are still relatively uncomplicated and can, with some effort, be explained to a layman. The field is advancing at a relentless pace, though, as machine learning and artificial intelligence solutions begin to be included in the process.
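To illustrate just how explainable the core of web scraping still is, here is a minimal sketch in Python: fetch a page, optionally through a proxy, and extract one piece of data. The target URL and proxy address are placeholders of my choosing, not real endpoints, and production scrapers layer parsing, scheduling, and compliance checks on top of this.

    import requests

    # A minimal sketch of the core of web scraping: fetch a page and
    # pull out its title. The proxy address below is a hypothetical
    # placeholder; with PROXIES left as None, the request goes out
    # directly and the script runs as-is.
    TARGET_URL = "https://example.com"  # a public demonstration page

    PROXIES = None
    # To route traffic through a proxy, supply a mapping like this
    # instead (the address is illustrative, not a real endpoint):
    # PROXIES = {"http": "http://proxy.example.com:8080",
    #            "https": "http://proxy.example.com:8080"}

    response = requests.get(TARGET_URL, proxies=PROXIES, timeout=10)
    response.raise_for_status()  # fail loudly on HTTP errors

    # Crude extraction of the <title> tag; real scrapers would use an
    # HTML parser rather than string splitting.
    title = response.text.split("<title>")[1].split("</title>")[0]
    print(title)  # prints "Example Domain"

Everything beyond this snippet, such as rotating proxies, rendering JavaScript-heavy pages, and applying machine learning to the results, is where the picture becomes far harder to explain.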

My issue isn’t that things are getting more complicated over time. My issue is that our ability to explain those things isn’t improving at the same pace. As a result, technology is slowly “running away” from laymen, and some of those laymen are legislators.

A simple solution might seem apparent: have legislators seek counsel from experts and advisers on anything complicated. Legislators sometimes do this, both in EU and US institutions. I welcome such an approach, but while it (partly) solves our current issue, it still raises a few questions.

One of them is the identity of those experts. There is a real risk that they might represent only the positions and ideals of the largest corporations, which can sometimes resemble large bureaucracies.

The other issue, which I have already touched on and will keep expanding upon, is timeliness. Each consultation, meeting, and explanation takes time and money, slowing an already sluggish process even further.

Explaining complex topics and drafting suitable legislation takes time. The famous EU General Data Protection Regulation (GDPR), for example, took over four years to be adopted from the moment the European Commission proposed the first preparatory texts. It is a process that cannot be rushed, as the livelihoods of countless people depend on it: one piece of legislation can influence our understanding of justice for decades to come.

Slow and steady

I don’t dispute the necessity of having experts on hand. In fact, I think quite the opposite: it is one of the most important features of today’s legislative process.

In Lithuania, numerous institutions take advice from businesses, CEOs, and experts. Such a process results in better laws, meaning less oppressive and more accurate ones. We can’t deny, though, that it slows down an already sluggish process. However, as long as the institutions truly listen to the advice of the people who actually sit at the front lines of innovation, the slowness might be justified.

But the train of progress chugs on regardless. New industries develop around previously unseen technologies, and they can grow quite large before any legislation is enacted. You likely know as well as I do how much technology has changed in, say, the last four years. In the end, a “lag” becomes apparent as businesses rush forward while the law attempts to catch up.

Our industry is a perfect example of such an occurrence. No direct legislation on web scraping has been published. We can closely follow case law, but what are judges to do?

In industries where no direct legislation exists, judges have to rely on a multitude of secondary factors, which puts additional strain on them. In some cases, they might be left with only the general principles of law.

In web scraping, data protection laws such as the GDPR have become the go-to basis for related cases. Many of them have been decided on the strength of these regulations, and rightfully so. But scraping involves much more than just data protection.

Case law, mostly from the US, has in turn become one of the foundations directing our current understanding of the legal intricacies of web scraping. Regretfully, though, that direction isn’t set in stone.

Yet using such indirect laws and practices to regulate an industry, even with the best intentions, can lead to unsatisfying outcomes. The majority of publicly accessible data is held by a handful of companies, particularly social media websites.

Social media companies and other data giants will do everything in their power to protect the data they hold. Unfortunately, they might sometimes go too far in protecting personal data. Instead of truly empowering users with the ability to control their data, they may overuse data protection laws to keep it all for themselves.

Self-regulation

It would be fair, I think, to assume that most businesses will pursue whatever is in their best interests as long as it is legal. There are more cases like web scraping, where industries exist at least partly outside of regulation (not necessarily against it) and rely on the goodwill of businesses.

Self-regulation, then, stands as the temporary solution to these issues while businesses wait for legislation to catch up. There have been remarkably successful cases of self-regulation, such as the American Bar Association and the International Council for Ad Self-Regulation (ICAS).

High-tech industries, however, have not only the perfect opportunity for such an approach but also added incentives. Many developments in these sectors, web scraping included, can be used for the public good. Yet they are rarely used by the public, by governments, or by journalists, due to the lack of regulation.

Companies that engage in self-regulation show goodwill towards the rest of the world. Such actions demonstrate a willingness to share the beneficial side of high-level technology while restricting possible misuse. Those interested in fairness in the proxy and web scraping industry should stay tuned for further action.

Written by Denas Grybauskas, head of legal at Oxylabs

