Wednesday, March 8, 2023

AI is evil

I am completely opposed to the expansive use of 'neural network' technologies.

AI 'solves' a problem without actually solving it.  The internal logic of the system is incomprehensible.  For that reason, in some unfortunate circumstance it will make the opposite of the correct decision.  According to Murphy's Law, this will happen at the worst possible time, causing the world to blow up or whatever (sadly an outcome that could also be achieved with human 'intelligence' too).

There might be some areas where it is useful.  But I'd rather not guess, because even something as modest as character recognition, embedded in a much bigger process that was otherwise perfectly good, would trigger that Murphy's Law clause about 'The Worst Possible Time.'

So I've continued to disparage neural networks ever since I first learned of them in 1983, only more and more as my early beliefs have proven true, with the failures of 'self-driving cars,' for example.

Neural networks are nothing like real biological neural systems, which have complex underlying physical/mechanical structures, influenced by genes and environment in addition to continuous 'training'.


I believe in solving problems the old-fashioned way: by understanding them and developing deterministic algorithms whose behavior is fully known.

My longtime speciality, then and now, is the combination of procedural algorithms with pseudo-random numbers, which are themselves mostly products of procedural algorithms, combined with a time-related chance element (such as the precise time of day)--the so-called 'seed.'
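A minimal sketch of that pattern, assuming nothing fancier than a textbook linear congruential generator (the constants are the well-known Numerical Recipes ones, chosen purely for illustration, not anything from my own work):

import time

class LCG:
    """A linear congruential generator: a pure procedural algorithm
    whose only chance element is the time-derived seed."""
    A, C, M = 1664525, 1013904223, 2**32   # standard 32-bit LCG constants

    def __init__(self, seed=None):
        # The 'seed': a time-related chance element (precise time of day).
        self.state = (int(time.time() * 1_000_000) % self.M) if seed is None else seed

    def next(self):
        # Deterministic from here on: same seed, same sequence, forever.
        self.state = (self.A * self.state + self.C) % self.M
        return self.state / self.M   # uniform float in [0, 1)

rng = LCG()
print([round(rng.next(), 4) for _ in range(5)])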

I quickly saw some mappings between the two approaches.

Start with a problem that has a finite deterministic solution.  Perhaps not all problems are like that (though most could be approximated that way...which in the end is exactly what a neural network does, only using weights instead of logical elements), but it's easy to think about.  How about the quadratic formula:

(-b +/- sqrt(b^2 - 4*a*c)) / (2*a)

This yields two real answers, one, or none, but we'll simplify that to 'the highest real solution, if there is one.'
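In code, that deterministic convention is only a few lines (a sketch; the function name is just for illustration):

import math

def highest_root(a, b, c):
    """Largest real root of a*x^2 + b*x + c = 0, or None if none exists."""
    if a == 0:
        return None                      # not actually quadratic
    disc = b * b - 4 * a * c
    if disc < 0:
        return None                      # no real solutions
    r1 = (-b + math.sqrt(disc)) / (2 * a)
    r2 = (-b - math.sqrt(disc)) / (2 * a)
    return max(r1, r2)

print(highest_root(1, -3, 2))   # x^2 - 3x + 2 = 0  ->  2.0
print(highest_root(1, 0, 1))    # x^2 + 1 = 0       ->  None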

Now assume we are training a neural network to solve this problem.  At any point until the solution is perfectly locked in (if it ever is!), the behavior of the neural network can be thought of as the sum of two components:

<the correct answer> + <nn-approximation-noise>

This noise has a very peculiar property...it's non-Gaussian, because extreme outlying results may occur with very low probability.
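Here's a toy illustration of the difference (the heavy-tailed, Cauchy-style noise model is purely an assumption for the sake of argument, not a measurement of any actual network):

import math
import random

random.seed(1)
N = 100_000
SCALE = 0.01

# Gaussian noise: the worst case stays within a few standard deviations.
gaussian = [random.gauss(0, SCALE) for _ in range(N)]

# Cauchy-like (heavy-tailed) noise: usually tiny, occasionally enormous.
heavy = [SCALE * math.tan(math.pi * (random.random() - 0.5)) for _ in range(N)]

for name, errs in (("gaussian", gaussian), ("heavy-tailed", heavy)):
    abs_errs = sorted(abs(e) for e in errs)
    print(f"{name:12s}  median |error| = {abs_errs[N // 2]:.4f}"
          f"   worst |error| = {abs_errs[-1]:.1f}")

The typical error looks equally harmless in both cases; only the heavy-tailed one occasionally produces a result that is wildly, catastrophically wrong.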

Worst possible time.


