Amused Me This Monday Morning

Two things of a religious nature that amused me this morning:

The Rapture Index – what happens if you take the models used to predict global stock exchange behaviour and apply them to biblical prophecy about “the last days”. It’s funny, right up until you realise that they’re absolutely serious. Pretty site, though.

How To Tell If Your Child Is A Goth (and therefore worshipping Satan and in great danger!) – hilariously bad, scary how the fundamentalist Christians find these things to blame for the world’s evils and to find Satan in. I particularly love the fact that you can tell that you have “strayed from the path of the Lord” by what breakfast cereal you eat. I originally lifted the entry from Faye‘s blog, but as she’s made it “friends only” I can’t link to it from here. Archived copy.

A.I. For Deluded Nutcases

Some goon (sorry: Californian counsellor) has patented Inductive Inference Affective Language Analyzer Simulating Artificial Intelligence (including the Ten Ethical Laws Of Robotics). It’s nothing but unintelligible babble, interspersed with (inaccurate) references to artificial intelligence theory. The author (who also writes a book on family values with a distinct evangelical slant, from which most of the text of the patent seems to be taken) appears to know nothing about A.I. or computer science. In addition, I find his suggestion that ‘woolly’ and ‘vague’ rules and ‘commandments’ are sensible choices for A.I. safeguards deeply misguided:

While a meaningful future artificial intelligence may be more than capable of understanding rules set out in the way a human might express them – indeed, for some machine intelligences (artificial or not), this capacity to understand human speech and expression could be a very useful feature – this is not the level at which safeguards should be implemented.

While I appreciate the need for ‘safeguards’ (the need is that humans would not feel safe without them, as even early machine intelligences – having been built for a specific purpose – will be in many ways superior to their human creators and therefore be perceived as a threat to them), I do not feel that a safeguard which depends on the machine already being fully functional would be even remotely effective. Instead, such safeguards should be implemented at a far lower and fundamental level.

For an example of this, think of the safety procedures that are built into modern aircraft. An aeroplane is a sophisticated and powerful piece of machinery with some carefully-designed artificial intelligence algorithms pre-programmed into it, such as the autopilot and autoland features, the collision avoidance system, and the fuel regulators. Other, less sophisticated decision-making programs include the air pressure regulators and the turbulence indicators.

If the cabin pressure drops, an automatic system causes oxygen masks to drop from the overhead compartment. But this is not the only way to cause this to happen – the pilot also has a button for this purpose. On many ‘planes, in the event of a wing fire, the corresponding engine will be switched off – but this decision can be overridden by a human operator. These systems are all exhibiting high-level decision-making behaviour: rules programmed into the existing systems. But these are, in the end, a second-level safeguard to the low-level decision-making that prompts the pilot to press the button that drops the masks or keeps the engine on. These overrides are the most fundamental and most crucial safeguards in a modern aircraft: the means to physically cause or prevent the behaviour of the A.I.

Let’s go back to our ‘robots’ – imagine a future not unlike that depicted in films like Blade Runner or I, Robot, in which humanoid robotic servants assist humans with many menial tasks. Suppose, for whatever reason (malice, malfunction, or otherwise), a robot attacks a human – the first level of safeguard (and the only one suggested both by the films and by the author of the “Ten Ethical Laws”) would be that the human could demand that the robot desist. This would probably be a voice command: “Stop!”. But of course, this is like the aeroplane that ‘decides’ to turn off a burning engine – we already know that something has ‘gone wrong’ in the A.I. unit: the same machine that has to process the speech, ‘stop’. How do we know that this will be correctly understood, particularly if we already know that there has been a malfunction? If the command fails to work, the human’s only likely chance of survival would be to activate the second, low-level safeguard – probably a reset switch or “big red button”.
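To make the distinction concrete, here’s a minimal sketch of the two levels of safeguard (all names here are my own invention for illustration – nothing from the patent). The high-level safeguard is a rule the A.I. itself must interpret, so it fails when the A.I. fails; the low-level safeguard bypasses the A.I. entirely:

```python
class Robot:
    def __init__(self):
        self.power = True       # low-level: a physical power relay
        self.faulty = False     # simulates a malfunctioning A.I. unit

    def hear(self, command):
        """High-level safeguard: depends on the A.I. working correctly."""
        if not self.power:
            return "inert"
        if self.faulty:
            return "command misinterpreted"  # the 'Stop!' that fails
        if command == "stop":
            return "desisting"
        return "acting"

    def big_red_button(self):
        """Low-level safeguard: cuts power regardless of the A.I.'s state."""
        self.power = False
        return "power cut"


robot = Robot()
robot.faulty = True              # something has 'gone wrong'
print(robot.hear("stop"))        # the voice command fails
print(robot.big_red_button())    # the override still works
print(robot.hear("stop"))        # the robot is now inert
```

The point is in where each safeguard lives: `hear()` runs inside the (possibly broken) A.I., while `big_red_button()` flips a relay that no amount of misbehaving software can argue with.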

You see: the rules that the author proposes are insubstantial, vague, and open to misinterpretation – just like the human’s cry for the robot to stop, above. The safeguards he proposes are no more effective than asking humans to be nice to one another is at preventing crime.

Whether or not it is ethical to give intelligent entities ‘off’ buttons is, of course, another question entirely.

Additional: On further reading, it looks as if the author of the document recently saw “I, Robot” and decided that his own neo-Christian viewpoint could be applied to artificial intelligences: which, of course, it could, but there is no reason to believe that it would be any more effective on any useful artificial intelligence than it would be on any useful ‘real’ intelligence.

God [humour]

[this post was damaged during a server failure on Sunday 11th July 2004, and it has not been possible to recover it; a partial recovery was made on 13 October 2018]

[missing picture]

Well; I’m glad he cleared that up for us.

Christians Should Be Banned From The Internet

[this post was lost during a server failure on Sunday 11th July 2004; it was partially recovered on 21st March 2012]

If you’re going to spend (at an absolute minimum – and probably closer to four times the amount) $350 on a series of banner advertisements promoting your service, to be displayed inside a popular ad-sponsored piece of software, you’ll check your spelling, right? Right? Look at this:

[this image has been lost]

Sometimes I really do feel that Christians should be banned from the internet. They should certainly be disallowed from writing web pages – other than the Christians, I’ve never seen a group of people who have – within their own group – broken every single rule of good web site design. Well… except if you consider GeoCities users a group of their own.

As if this page, which scrolls on and on, has large numbers of images linked from other sites, and uses a (badly) tiled background image, isn’t bad enough, I’ve seen:

  • This GeoCities monstrosity, with a stupid amount of animated GIFs, annoying applets, and platform-dependent code (including an embedded… [the rest of this post has been lost]

Orange Gives Me 80p For No Apparent Reason

Today, Orange sent me a text message apologising for charging me for two picture messages earlier this year, and have apparently credited me 80p as a gesture of compensation. The Register reports that this has happened to others, too, but I can’t help but feel that Orange’s mistake is even larger than they thought it was – I never received these picture messages in the first place!

I’m tempted to call them and complain that I never received the two picture messages for which I’m having my money refunded, but as I’m not even sure that I was charged for them, either (I can’t see it on my bill), I’ll probably lose my free 80p if I do. Decisions, decisions.

In other news, comment-heavy discussion on the difference between Christianity and Islam on Alec’s LiveJournal [link updated to use Web Archive, which still holds a copy]. Take a look.

Avatar Diary

Went to church with Alecia and Richard. That’s a weird experience for me. Tasted the new flavour of Juicefuls from Brewsters while I was in town – Strawberry flavour. They’re just as addictive as all the other flavours. Though I had intended to use the remainder of the day to finish tidying my room (the end of a mammoth task) and doing homework which is due later this week, I was instead distracted and played on Civilization II, watched TV, and other such mind-expanding activities. Went to bed in the wee small hours, after updating the Castle with my latest Avatar Diary accounts.

Avatar Diary

Maths in the morning. How is it that the first lesson in a new module always makes so much more sense than any other? Went home for lunch in my three-hour break, returning for Psychology. Watched a video on “Eyewitness Testimony”. We’ve seen the same one three times now. Alecia took the second half of the lesson to preach to us – usually the kind of activity reserved for Monday’s Psychology lessons – and persuaded Richard and me to come with her to church on Sunday. Promised I’d go, and I think I’ve persuaded Rik as well (even though he let her down over the Carol Service). “You can’t knock it ’til you’ve tried it,” I told him, with my usual democratic tact.

Couldn’t sleep again tonight – same as last night – watched TV until about 4:30am…