10 Comments

The most important part of this narrative for me is how licensing and regulation will cripple small businesses and empower big tech companies. It's already so hard and expensive to start a successful software business. As working-class people, we're already at a severe disadvantage because so much of our digital lives and the internet is controlled by a few big tech companies and telecom providers. We need more startups and open-source competition in this space to help distribute power and wealth.

The important part of this for consumers to understand is that developing and distributing AI systems and ML models is going to continue to get (exponentially) easier and cheaper. It's becoming just as easy as building a website or mobile app, and think about how many of those exist.

Similarly, AI and ML systems can be built and deployed on any computer, and most people have several in their home. Computing is ubiquitous, and so these systems will be too. It also means anyone can build them and hide their development as easily as anything else you can hide on a computer.
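To make "as easy as building a website" concrete, here is a minimal sketch, assuming scikit-learn is installed, of training a working image classifier on an ordinary home machine; the dataset and model choice are illustrative, not anything from the article:

```python
# Minimal sketch: train and evaluate a digit classifier on a laptop CPU.
# Dataset and model are illustrative assumptions (scikit-learn built-ins).
from sklearn.datasets import load_digits
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import train_test_split

X, y = load_digits(return_X_y=True)  # 1,797 tiny 8x8 images of handwritten digits
X_train, X_test, y_train, y_test = train_test_split(X, y, random_state=0)

model = LogisticRegression(max_iter=1000)  # a simple, classical classifier
model.fit(X_train, y_train)                # trains in seconds on a CPU
print(f"test accuracy: {model.score(X_test, y_test):.2f}")
```

The point is not this particular model but the barrier to entry: a dozen lines, no special hardware, no license required.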

Licensing or any other solution focused on the technology will not work, simply because the problem is not the technology but the economic system that requires it to be exploited. Witness the spectacle of the brightest minds researching AI simultaneously warning of the potential for extinction and begging the government to slow them down or even stop them. If AI is so dangerous, why not just stop? Because our economic system requires that they continue, regardless of the consequences. Until that problem is solved, nothing else will work.

Good article.

There is one element missing: this presupposes a democratic, non-autocratic government controlling the companies based in its territory.

That does not apply to authoritarian countries like China or Russia, which might develop evil AI in secret. And how do we keep the intelligence and military communities in check?

You have given good reasons why citizens should oppose regulation of AI and why governments will definitely go down that path.

This article reads like the feverish panic of a kid who is worried his toys will be taken away forever.

If regulation limits computational density at OpenAI's level, no small business will be hindered by that regulation, because no small business is ever going to reach OpenAI's scale of computational density. Let us not forget that the NEC (National Electrical Code) is stratospherically more complex and burdensome than any software regulation realized to date (how much torque must the screws take, with what metallic composition?), and yet your friendly neighborhood electrician is not out of a job. The idea that this somehow hurts small businesses more than big ones is simply not based in reality (note the lack of examples or citations).

If in 5 years I can buy a computer that trains AI as well as OpenAI's GPU-farm supercomputer of yesterday, then in 5 years OpenAI could have something exponentially more powerful and dangerous. Computational density limits *level* the playing field, and they don't require ambiguous, unstated "extraordinary powers" any more than the FBI needs them to sniff out homebrew bomb manufacturers, especially considering how few chip fabs exist. The author of this article is thinking much too small-scale: GPT-4 is not good enough to be dangerous, and any scale of AI that any of us could achieve without billions and billions of dollars is not the scale that anyone is proposing to regulate against.
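For a sense of that scale gap, here is a hedged back-of-envelope sketch; the 1e26-operation figure is a commonly discussed reporting threshold (the one used in the 2023 US executive order on AI), and the GPU throughput is an order-of-magnitude assumption for a high-end consumer card:

```python
# Back-of-envelope: how long would one consumer GPU need to run nonstop
# to reach a frontier-scale compute threshold? All figures are rough
# assumptions, not measurements.
THRESHOLD_OPS = 1e26        # reporting threshold from the 2023 US executive order
CONSUMER_GPU_FLOPS = 1e14   # ~100 TFLOP/s, a rough figure for a high-end consumer GPU

seconds = THRESHOLD_OPS / CONSUMER_GPU_FLOPS
years = seconds / (3600 * 24 * 365)
print(f"~{years:,.0f} years of nonstop compute")  # roughly 32,000 years
```

No small business is anywhere near that line; only a handful of GPU-farm operators are.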

The data-center surveillance would not require an "unprecedented" amount of cooperation. Governments already cooperate more than that on drugs, financial enforcement, various hazardous materials, and probably other things. You talk about million-dollar spends, but $10k financial transactions are already routinely surveilled.

But it's true that licensing isn't the way. A total ban is the way.

re: competition and "monoculture" and "Defining the boundaries of acceptable speech"

It seems that "defining the boundaries of acceptable speech" is something they are trying to do in a one-size-fits-all fashion: a single monoculture AI (or a few). OpenAI and Anthropic both want "democratic" input for their models, which risks merely producing another one-size-fits-all monoculture AI. Depending on how they work that, it risks tyranny of the majority, or Taleb's "dictatorship of the most intolerant minority" as a special interest.

Many are aware of speech controversies around the world, for instance the many Muslims who consider it unacceptable to create an image of Mohammed. Will the AI create such an image? There is an ongoing culture war right now in American school districts over what to teach children; imagine the global cultural and religious war over determining the values of a one-size-fits-all AI.

One answer is a myriad of AIs (or one AI able to take on many personas) serving the varied subcultures around the world, rather than a one-size-fits-all monoculture. This page goes into using democratic input as guidance for such things, rather than as control and restriction:

http://RainbowOfAI.com
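As a sketch of what "many AIs for many subcultures" could mean in practice, one base model could be steered by per-community value configurations; the community names, prompts, and message format below are hypothetical illustrations, not anything from RainbowOfAI.com:

```python
# Hypothetical sketch: one base model, many community-chosen value configs.
# Community names, prompts, and the message shape are illustrative only.
COMMUNITY_VALUES = {
    "default": "Answer helpfully; decline nothing lawful.",
    "no-depictions": "Decline requests for images of religious figures.",
    "strict-k12": "Keep all content appropriate for schoolchildren.",
}

def build_messages(community: str, user_prompt: str) -> list:
    """Prepend the community's chosen values as a system message."""
    system = COMMUNITY_VALUES.get(community, COMMUNITY_VALUES["default"])
    return [{"role": "system", "content": system},
            {"role": "user", "content": user_prompt}]

print(build_messages("no-depictions", "Create an image of Mohammed."))
```

The values live in a config each community controls, not in the one model everyone is forced to share.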

Competition will alleviate many problems, whereas regulation is likely to help the big players, even if OpenAI claims it isn't trying to squash startups. A big factor is the potential for holding vendors liable for damage done by false, perhaps libelous, statements from AI systems. There may be a need for an equivalent of Section 230 to protect the AI industry and make clear that users are responsible for how they use what a tool outputs. This page goes into these issues, and into the importance of competition to prevent, for instance, the two major office suites shipping just two AIs that nudge most of the writing around the world:

https://PreventBigBrother.com

You only have to look at the early 20th century to see what lack of regulation creates. Companies form cartels and fix prices. Other companies buy up or bankrupt their competition and become monopolies. Big companies starve out small competitors by their control over markets, transportation, and support industries. Total laissez-faire capitalism does not end well for the consumer or small companies.

There is no competition without regulation.

Poorly implemented regulation *can* inhibit competition, but it's not an inevitability. Steering the public conversation to treat regulation as a binary choice helps companies like OpenAI by making bad regulation (good for them) and no regulation (good for them) seem like the only options. The conversation should instead be about which *kinds* of regulations are beneficial.

Every industry that fell into regulatory capture started out, of course, with idealists who claimed "this time it's different, we'll have good regulation!" while neglecting to consider that they were doing nothing differently and so should expect no different result. Nobel laureate economist George Stigler and others explain why human behavior, organizational structures, and incentives tend to lead to this; for instance, the "experts" tend to come from industry.

Often there is government regulation hiding behind the scenes of something people claim is a private monopoly. People get confused into thinking something is a free market when it's in reality heavily distorted by government regulation. There are lots of myths about the early 20th century that are emotionally appealing to those fearful of business, but which fall apart under the close examination people don't bother engaging in. Unfortunately, most writing about economics for the general public may be superficially plausible but isn't well researched or reasoned. It's like those who wish to regulate AI without grasping it.

I am very interested in the potential of end-to-end open AI as a path for implementing the #AIBillOfRights and other important principles.

There's a unique opportunity to make a difference when so much is in flux.

https://twitter.com/wait_sasha/status/1667312235167854593?t=XB68pPau-nrGqL7W53aN2g&s=19
