
I guess in light of the well-hailed OnRez Viewer, Lindenlab was tempted to build a renovated version of the viewer on its own. The result is the Dazzle first look viewer.

Dazzle looks quite blue when you use it, but aside from that it's still the old Lindenlab viewer: all the same menus and such, nothing new in it. Yet.

That's the difference from the OnRez Viewer. When it became publicly available, it had some innovations in it, like the built-in web browser, the renovated search and others. OnRez itself was innovative; Dazzle on its own is just a new skin for the client, nothing more.

But what the client really needs is a rearrangement and rethinking of some menus, interfaces and much more, which hasn't happened in Dazzle - yet. So on that point Dazzle is simply disappointing at the moment.

There are still many fields in which Second Life gets new development - a good thing, if you ask me. But since those fields are quite diverse, there are now so many first look and beta viewers around that it gets troublesome to test them all. Don't believe me? Well, for Windows, these viewers exist besides the official one:

  • the release candidate viewer (version 1.19.0.2),
  • the Windlight first look viewer,
  • the Dazzle first look viewer and finally
  • the beta test grid viewer.

Each viewer sports a feature the others lack, or one that is going to make it into the official line of viewers sometime. So we now have a quite fragmented field of beta viewers. Wouldn't it be better to roll all those beta features into one beta line and hand that one out to people feeling adventurous? I strongly believe so.

All the big databases of Second Life are using MySQL. Lindenlab runs them on the premise that databases are ordinary commodities: better to run 50 of them than to have one big one. Choosing and running a database engine is one thing; the other is how you set it up.
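The premise above - many small, ordinary databases instead of one big one - can be sketched as simple hash-based routing of keys to shards. The shard count, host names and hashing scheme here are my own illustration, not Lindenlab's actual setup:

```python
import hashlib

# Hypothetical pool of 50 small, ordinary database servers.
SHARDS = [f"db{i:02d}.example.com" for i in range(50)]

def shard_for(key: str) -> str:
    """Route a key (e.g. an avatar UUID) to one of the shards by hashing it.
    Every lookup for the same key deterministically hits the same server."""
    digest = hashlib.md5(key.encode("utf-8")).hexdigest()
    return SHARDS[int(digest, 16) % len(SHARDS)]

print(shard_for("avatar-1234"))  # always the same shard for this key
```

Each shard stays small enough to manage, back up and replace individually - the trade-off is that queries spanning many keys have to fan out across servers.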

A big matter of choice, with impact on the whole data system, is of course the operating system - Lindenlab runs Linux - and the underlying filesystem. According to the SL history wiki, all of Lindenlab's database servers use ext3 as the default filesystem, after they used ReiserFS 3 for a while and evaluated XFS. Ext3 is really a bad choice if you need the best performance your hardware can give.

Well, why is that? There are some reasons. There's an interesting IRC log of MySQL employee Kristian Köhntopp, who is quite well known for his articles about computer topics. The IRC log is about which filesystem you should choose for a database server in general, but his views of course apply just as well to the databases powering Second Life.

So what's wrong with ext3 as a filesystem for a database server according to Mr. Köhntopp, and what's OK about it? Several things:

  • The number of files in a directory doesn't really matter anymore with ext3, compared to filesystems like XFS, when you've created the ext3 filesystem with the dir_index option.
  • A big disadvantage is that ext3 flushes its log quite irregularly, meaning the execution times of certain MySQL queries can differ quite a lot.
  • Another disadvantage is that ext3 does not perform very well when many concurrent clients - in the range of 10-50 - are connecting read/write. With only a single thread, ext3 is mostly expected to be faster than XFS. But with many concurrent clients - and that's certainly what we have in Second Life - XFS beats ext3 hands down.
  • In contrast to ext3, XFS has much better flush times - they are more regular - and it's much better at preventing file fragmentation.
  • Ext3 makes "block marmalade", meaning interchained files, if several files in the same directory grow at the same time; XFS is good at preventing such a thing.

In conclusion, Köhntopp states that ext2 (which is the base of ext3) reflects the state of the art of around 1984. XFS, on the contrary, is built on papers from around 1994, meaning it's younger and has a bigger code base. This means that XFS might still have more bugs than ext3, but in features that ext3 doesn't have.

Oh, and by the way, judging from this blog entry from 2005 about the switch back to ext3, Mark Linden hasn't really understood what a journaling filesystem is for. If you take a look at the second mail at this link, you see what Theodore Ts'o means: keeping the data intact is not what a journaling filesystem was made for. It was made to keep the filesystem itself intact.

If you want to have an intact database after a crash, use an ACID-compliant engine, like the InnoDB engine of MySQL.
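The guarantee in question can be shown with any ACID-compliant engine; here is a minimal, hypothetical sketch using Python's bundled sqlite3 (InnoDB gives MySQL the same atomicity via COMMIT/ROLLBACK):

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE balances (name TEXT PRIMARY KEY, amount INTEGER)")
conn.execute("INSERT INTO balances VALUES ('alice', 100)")
conn.commit()

# A transfer that fails halfway through: atomicity means the whole
# transaction is rolled back, never leaving a half-applied change behind.
try:
    conn.execute("UPDATE balances SET amount = amount - 40 WHERE name = 'alice'")
    raise RuntimeError("simulated crash before the matching credit")
except RuntimeError:
    conn.rollback()

amount = conn.execute("SELECT amount FROM balances WHERE name = 'alice'").fetchone()[0]
print(amount)  # 100 - the aborted transaction left no trace
```

With a non-transactional engine (or raw files), the debit could survive the crash while the credit never happens - exactly the corruption ACID rules out.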

So what's there to say in conclusion? If Lindenlab is still using only ext3 as the filesystem for all of their database servers, and those servers typically have many concurrent read/write clients - 10-50 or more - they're denying themselves the speed a decent filesystem could give them, and they really, really should consider moving to another filesystem like XFS. That would also be one good explanation why e.g. the asset server is always so damn slow: because the filesystem is slow.

Havok 4 (which now belongs to Intel), the recent version of the physics engine Second Life uses, is going into beta test. This is something many people have been awaiting for one or two years.

This is first going to be a change under the hood; Lindenlab expects a more reliable SL experience from it. So you should not expect big changes in the beginning.

Later this could perhaps lead to a higher prim limit per vehicle, a feature many people have awaited for a long time. We'll see.


Prokofy Neva and the Second Life Insider are both claiming that the problems we've been experiencing with the grid for the last two days are not the result of internal technical problems but of a concerted griefer attack on the grid.

The exploit used was already reported on 12 February 2006 and still remains unfixed. They claim it is hard to fix, but hey - after more than a year, and with it being abused to attack the grid, it is time to do something about it! Neva claims that the purpose of those attacks is to make Second Life unusable for good uses like the Relay for Life event last weekend. Hm, there's some logic in that; if it was an attack, this event could well have been a worthwhile target for griefers.

What are the conclusions? We are never going to know whether those problems were a grid-wide griefer attack or not until the Lindens tell us. Some things clearly point in that direction. But if it was a griefer attack, the Lindens should say so and not leave us in the dark about it.

Ajaxlife, the web-based SL chat client, has gone open source and is available here. While it serves its purpose and works, I dislike the fact that it is implemented in Mono, due to libsecondlife. If it were running in another programming language, there would be no need for a sandbox and it would take much less memory on a server - but then again, we are all free to do better. 🙂

There's now a 3rd-party web-based SL client called AjaxLife in development and available. So far it has more or less the same feature set as Slink, meaning it's good for logging into the grid, chatting, sending instant messages, getting notifications, viewing the map and teleporting around. Of course - no graphics. The main focus so far is on communication and on machines too underpowered for the main client.

Since this works with AJAX, it should even work through most company firewalls - I guess the server makes the connection to the grid, not the web browser per se. I wonder which license this work is going to be put under...

One of the most annoying and persistent bugs at the moment is the broken friends list. This is a very critical feature, since many rely on it working properly. Here is the bug in JIRA, the open issue tracker of LL - quite an interesting read. It goes back to March; quite a bad thing, if you ask me...

In the Avastar #14 there's a comment by Gwyneth Llewelyn about the scaling of Second Life on page 6. She claims that one possible move to relieve the grid of stress would be to store the texture data - which supposedly runs into hundreds of terabytes (I wonder where she got this number from; I thought all data was around 34 terabytes at the moment according to this article, so I'll stick with that number, which means hundreds of terabytes is off by a wide margin) - outside the grid, perhaps even allowing users to store textures on their own servers. While I don't see the textures being stored on users' own servers, because of possible protests, she continues.

She states that if that move happened, the 2000 servers of SL would be able to hold 20 million simultaneous users, and that this could be achieved within a month with the work of one developer!

Personally, I really doubt that. I know they're going to switch from their own data protocol to HTTP sometime this year. Serving a texture is quite a simple task - provided it's stored in a plain filesystem and not in a database. Storing textures - binary data - in a database system is always very dumb; the clever way is to store only the file name in the database, since the database is much, much slower at that task and adds far more complexity.
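A minimal sketch of that "path in the database, bytes on disk" pattern, using Python's sqlite3 and a temporary directory as stand-ins for whatever database and storage a real asset server would use:

```python
import os
import sqlite3
import tempfile
import uuid

# Textures live on the filesystem; the database holds only the path.
texture_dir = tempfile.mkdtemp()
db = sqlite3.connect(":memory:")
db.execute("CREATE TABLE textures (id TEXT PRIMARY KEY, path TEXT)")

def store_texture(data: bytes) -> str:
    """Write the binary data to disk and record only its path in the DB."""
    tex_id = str(uuid.uuid4())
    path = os.path.join(texture_dir, tex_id + ".j2k")
    with open(path, "wb") as f:
        f.write(data)
    db.execute("INSERT INTO textures VALUES (?, ?)", (tex_id, path))
    return tex_id

def load_texture(tex_id: str) -> bytes:
    """Look up the path in the database, then read the bytes from disk."""
    (path,) = db.execute(
        "SELECT path FROM textures WHERE id = ?", (tex_id,)
    ).fetchone()
    with open(path, "rb") as f:
        return f.read()

tid = store_texture(b"fake texture bytes")
print(load_texture(tid) == b"fake texture bytes")  # True
```

The database only answers the tiny "where is it?" query; the heavy byte-shovelling is left to the filesystem (or a plain HTTP file server), which is much better at it.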

And of course the textures and so on are really not stored on one hard disk, but, I guess, on a logical volume or a clustered filesystem. There are enough techniques around to do that under Linux.

I personally think that serving the textures is not what strains the servers, because they're loaded into the cache and that's it. It's more the computing of viewing ranges, the database queries about the prims (if stored in a database) and their textures, interpreting scripts, computing the locations of all the avatars, doing the physics and so on. And that's why I think that even if the textures were located somewhere else, the technology now available on the main grid would not be up to handling 20 million users at the same time.

Now here's something from the IBM article that's worth its own entry: the openly available JPEG 2000 library OpenJPEG got faster! This means you don't need the proprietary Kakadu libraries anymore to get a good open-source client - if you're living on the bleeding edge, and once this makes its way into the main tree of OpenJPEG after some time, I guess. Yeah!