…and that’s because much of what I was writing here, I’m now writing over at IT Pro with my wife and co-writer Mary Branscombe.

You can find our IT Pro blog here (and its feed here).

There’s a sample of the headlines over in the sidebar…

It’s official – we can now announce the new writing gig I’ve been hinting at.

From the start of September Mary Branscombe and I will be looking after the Server and Networking sections of IT Pro. Not only that, we’ll also be running a joint blog on the IT Pro site.

On a more formal note, we’ll be doing several news stories a week, and a similar number of features a month – so we’ll be looking for plenty of press releases and people to talk to.

Any PRs with relevant clients, please, get in touch – we’re starting to work on September right now!

We’re not dropping any of our other regular writing. We’ll just be busier…


I wrote up my thoughts on Amazon’s EC2 for IT Pro.

Amazon has now added a new layer to its utility computing platform with the Elastic Compute Cloud, EC2. S3 redefined the pricing model for utility storage, and EC2 looks set to do the same for utility and grid compute resources.

You should all give Mary the credit for the pun in the strap…

It also looks as though we’ll have some very interesting writing news to announce next week. Stay tuned to this channel!

Amazon is launching a new service – the Elastic Compute Cloud (or EC2 to its friends).

Like S3, its storage service, EC2 is a “cloud” service, treating compute resources as a commodity that can be charged for as a utility. Applications are packaged and deployed as machine images – with templates available to ease configuration. Amazon is currently supporting Fedora Core 3 and 4 Linux OSes with a 2.6 kernel, though it says any 2.6 kernel–based distribution should work. Each image is the equivalent of:

a system with a 1.7GHz Xeon CPU, 1.75GB of RAM, 160GB of local disk, and 250Mb/s of network bandwidth.

S3 will be used for storage. Pricing is good, too, especially when compared to Sun’s $1/CPU/Hour:

  • Pay only for what you use.
  • $0.10 per instance-hour consumed (or part of an hour consumed).
  • $0.20 per GB of data transferred outside of Amazon (i.e., Internet traffic).
  • $0.15 per GB-Month of Amazon S3 storage used for your images (charged by Amazon S3).
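To put those rates in context, here’s a rough back-of-the-envelope sketch. The rates are the ones Amazon quotes above; the workload figures (instance count, hours, transfer and image size) are made up purely for illustration:

```typescript
// Rough EC2 cost sketch using the beta rates quoted above.
// The workload figures below are invented for illustration only.
const RATE_PER_INSTANCE_HOUR = 0.10; // $ per instance-hour (or part thereof)
const RATE_PER_GB_TRANSFER   = 0.20; // $ per GB sent outside Amazon
const RATE_PER_GB_MONTH_S3   = 0.15; // $ per GB-month of S3 image storage

function monthlyCost(instances: number, hoursEach: number,
                     gbTransferred: number, imageGbStored: number): number {
  const compute  = instances * Math.ceil(hoursEach) * RATE_PER_INSTANCE_HOUR;
  const transfer = gbTransferred * RATE_PER_GB_TRANSFER;
  const storage  = imageGbStored * RATE_PER_GB_MONTH_S3;
  return compute + transfer + storage;
}

// Ten instances running flat out for a 720-hour month, pushing 100GB out
// to the internet, with 5GB of machine images held in S3:
console.log(monthlyCost(10, 720, 100, 5).toFixed(2)); // "740.75"
```

Around $740 a month for ten always-on servers – and that’s the point: you only pay while your images are actually running.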

Worth looking at as a prototyping facility, or as a source of quick compute power when required.

It’s only a limited beta to start with, though. So don’t start piling on to it yet!

There’s a FAQ here.

Crossposted to Technology, Books and Other Neat Stuff.

Mary and I will be in and around San Francisco, San Jose, Silicon Valley and the Bay Area at the beginning of October.

If you’re a PR or a company representative, get in touch, as we’re looking for technology companies to visit whilst we’re there.

We’re interested in everything from enterprise architecture to desktop applications, with a particular interest in mobile and social technologies, as well as tools for managing service oriented architectures.

Want to give a US client exposure in the UK? Drop us a line!

[This is something in the way of an experiment to see if I can use this blog as a tool for handling pitches and RFIs.]

I had a very interesting conversation yesterday with Simon Phipps, Sun’s Chief Open Source Officer. You can read some of it here at IT Pro.

Sun’s Chief Open Source Officer Simon Phipps has announced the next stage of the open sourcing of Java in London this week, adding Java ME to the road map. Open source versions of both Java ME and Java SE should be available by the end of the year.

While there were no actual dates confirmed, Phipps went into more detail on the open source roadmap for Sun’s various software platforms. Describing it as a gradual process, he detailed Sun’s commitment to providing an open source software stack, from OS to Java, and in the future, its middleware.

We also talked about the missing element in many Open Source projects: governance.

While one of the keys to Open Source is the license, another is just how the project is run. And Simon sees one big problem facing many open source projects.

It’s all very well being open source, but if only one person has commit rights to the code base (the ability to make changes to the code), they’re going to be overwhelmed very quickly once the project becomes successful. Things get worse when commit rights are concentrated in a single company. A project run like that (and there are many, many of them, including some very high-profile ones indeed) is more like Microsoft’s shared source programme than anything else. There have even been cases where experts on a piece of code have left the company that sponsors the project, and have immediately lost any rights to work with the codebase…

The really successful projects, like Linux and Apache, have distributed commit rights, and a range of people from many different organisations adding code. That’s what Phipps wants to do with Sun’s open source projects. OpenSolaris is certainly successful, and has spawned several different distributions (including one that mixes Debian with a Solaris kernel), and he hopes to do the same with Java.

Cross Posted to Technology, Books and Other Neat Stuff


An interesting conversation yesterday with folks from AMD, on what’s going to be happening with the next generation or two of their virtualisation technologies in Opteron.

There’s a lot to be said about “Trinity”, their secure virtualisation platform, and “Raiden”, a client model for blade servers – but the really interesting story is “Torrenza”. This is where AMD opens up its HyperTransport pipeline – as well as its socket specifications – to third parties. So on a multicore, multiprocessor motherboard, you could drop in a physics coprocessor for fast gaming, or (and this is where I think things will get very interesting) a dedicated processor for additional server functionality.

This is where AMD needs to talk to companies like Azul. Dropping one of Azul’s 48 core VM-specific processors onto a HyperTransport bus alongside a set of Opterons could really speed up your Java applications (with direct access to the system memory) – and get rid of all that nasty non-deterministic garbage collection…

AMD is taking the enterprise server game in a very different direction to Intel. Let’s see if the industry takes them up on it…


eWeek has an interesting piece on Blue Pill, Joanna Rutkowska’s proof-of-concept hyperjacking rootkit, which is about to do the rounds of the security shows…

“The idea behind Blue Pill is simple: your operating system swallows the Blue Pill and it awakes inside the Matrix controlled by the ultra thin Blue Pill hypervisor. This all happens on-the-fly (i.e. without restarting the system) and there is no performance penalty and all the devices,” she explained.

Rutkowska stressed that the Blue Pill technology does not rely on any bug of the underlying operating system. “I have implemented a working prototype for Vista x64, but I see no reasons why it should not be possible to port it to other operating systems, like Linux or BSD which can be run on x64 platform,” she added.

Interesting times…

While I may be buried in the ballroom of a hotel in Brooklyn, well into my second day of non-stop PowerPoint on the next Microsoft Office (I'm assuming that there are five boroughs out there somewhere…), here's a little SOA-associated piece from El Reg for your delectation:

The recent 4.1 release of BlackBerry Enterprise Server (BES) from Research in Motion (RIM) opens the door to a new set of mobile development tools and technologies. BlackBerries aren't just for email – they're also a secure pipe to and from your network. With the latest build of RIM's MDS (Mobile Data Services) platform bundled with BES 4.1, BlackBerries are able to take advantage of any web services in your, and your partners', networks, and can quickly become a secure input device. If you've got BES 4.1 running on your network, turning your BlackBerry services on is nearly as easy as downloading RIM's MDS Studio application, although it's a hefty download at well over 230MB. You'll also need to pull down the documentation and sample applications at the same time. The Studio includes a BlackBerry simulator, so you can test applications as you build them.

Read on here. And a big hand to the folk at RIM, who were able to get me the code despite their download server having a serious meltdown, so I was able to deliver my copy literally as the taxi driver who was taking us to Heathrow rang the doorbell…

I spent last Friday morning braving the delights of Highway 17 over the Santa Cruz mountains in the rain to visit Azul Systems' offices next door to Google in Mountain View, learning lots of interesting stuff about their Vega processor and their network-attached processing tools, including their "pauseless" Java garbage collection.

You can read about some of my morning at The Register:

Adding storage to a network is straightforward; adding processing power tends to involve a lot more complexity. This is something Azul Systems aims to change. Following the recent announcement of its second-generation Vega processor comes today’s news that BT will be using the company's processing appliances to handle its existing web applications, as well as providing the foundation for a utility computing farm – part of BT’s 21st Century Network.

The Azul platform is more than just a box you connect to your network to replace software virtual machines. It’s also a set of tools for managing application performance and handling how you bill the rest of the business for CPU usage. Mainframe administrators will be familiar with these techniques, but they’re still new to the arrays of application servers that now run many of our businesses. Being able to bill for actual CPU and memory usage is a key part of any utility computing platform – whether it’s Sun’s $1 per CPU per hour or an IT department billing the rest of the business for application operations.
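To make that concrete, here’s a minimal chargeback sketch – the interface, the memory rate and the usage figures are hypothetical, invented purely for illustration; the only real number is Sun’s headline $1 per CPU per hour mentioned above:

```typescript
// Hypothetical utility-style chargeback: bill each department for the
// CPU-hours and GB-hours of memory its applications actually consumed.
// Rates and usage figures are illustrative, not Azul's or Sun's pricing.
interface Usage {
  department: string;
  cpuHours: number;      // metered CPU time consumed
  memoryGbHours: number; // metered memory footprint over time
}

const CPU_HOUR_RATE    = 1.00; // $ per CPU-hour (Sun's headline utility rate)
const MEM_GB_HOUR_RATE = 0.05; // $ per GB-hour of memory (made up)

function chargeback(usage: Usage[]): Map<string, number> {
  const bills = new Map<string, number>();
  for (const u of usage) {
    const cost = u.cpuHours * CPU_HOUR_RATE + u.memoryGbHours * MEM_GB_HOUR_RATE;
    bills.set(u.department, (bills.get(u.department) ?? 0) + cost);
  }
  return bills;
}

// e.g. the web team's app servers used 1,200 CPU-hours and 4,800 GB-hours:
console.log(chargeback([{ department: "web", cpuHours: 1200, memoryGbHours: 4800 }]));
// Map(1) { "web" => 1440 }
```

Mainframe shops have been doing exactly this kind of metering for decades – the new bit is doing it across a farm of network-attached Java appliances.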

They've got quite an impressive server room too, especially when you realise that each of those boxes has 384 cores – so that's the equivalent of 9600 CPUs in this rack alone.

Not bad – and what's more important, not too power hungry.

A series of photographs of UMPC prototypes and concept devices, taken at Intel’s IDF Spring event in San Francisco.

1. UMPC at work, 2. UMPC at rest, 3. Concept UMPC, 4. Concept UMPC, 5. Concept UMPC, 6. The Real UMPC, 7. The Real UMPC

A fascinating device – and one that will make an excellent mobile client for service architectures.

Most UI tools come with libraries of reference controls. Coding up a drop-down menu can be more of a problem than you first think – so approaches like Flash’s Halo and the control libraries shipping with Microsoft’s Expression Interactive Designer are considerable time savers…

Yahoo! has given AJAX developers the same sort of bootstrap, with its User Interface Library.

The Yahoo! User Interface Library is a set of utilities and controls, written in JavaScript, for building richly interactive web applications using techniques such as DOM scripting, HTML and AJAX. The UI Library Utilities facilitate the implementation of rich client-side features by enhancing and normalizing the developer’s interface to important elements of the browser infrastructure (such as events, in-page HTTP requests and the DOM). The Yahoo UI Library Controls produce visual, interactive user interface elements on the page with just a few lines of code and an included CSS file. All the components in the Yahoo! User Interface Library have been released as open source under a BSD license and are free for all uses.

Components include Calendar controls, sliders and tree views, as well as utilities for handling animations and working with the DHTML document object model more effectively. There’s an associated library of design patterns as well.
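As a flavour of the “few lines of code” claim, here’s a minimal sketch of dropping the Calendar control onto a page. It assumes the pattern from Yahoo!’s own examples – the yahoo, event and calendar scripts plus the calendar CSS already included on the page – and the element ids are placeholders:

```typescript
// Minimal sketch: render a Yahoo! UI Calendar control into a container div.
// Assumes the library's yahoo, event and calendar scripts (and the calendar
// CSS) are already loaded on the page; "cal" and "calContainer" are
// placeholder ids for this example.
declare const YAHOO: any; // the library's global namespace

YAHOO.util.Event.addListener(window, "load", function () {
  const cal = new YAHOO.widget.Calendar("cal", "calContainer");
  cal.render(); // draws the calendar markup into the container element
});
```

That really is about all it takes to get a working, styled calendar widget – the utilities handle the browser differences for you.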

Looking good, and hopefully making it easier to deliver the type of web-based UI that works well with SOA applications.


I’ve been doing some writing for the new developer section of the Register – looking at tools that could help businesses deliver better SOA implementations.

First, a look at Microsoft’s next-generation UI development technology, Expression Interactive Designer.

It’s been a long time coming. First rumoured at the 2003 PDC (Microsoft Professional Developers Conference), Microsoft’s Sparkle has finally made it part way out the door. More than two years after the original whispers of a Microsoft competitor to Flash, Expression Interactive Designer has arrived. Now you can finally start building all those innovative Windows Vista applications Microsoft has been hoping for.

And secondly, a look at how Salesforce.com is delivering a platform that can be used as a standalone application, a service host, or a service in its own right (all at the same time).

If Web 2.0 mashups are the future of the internet, what will the enterprise application look like? The folk at Salesforce.com think they have the answer, in the shape of the Winter ’06 release of their web application platform – and the introduction of a web service and application directory, the AppExchange.

This is the new home for this blog.

I’ll be updating directories and feeds shortly.

I’m considering moving A New IT World here from Blogspot.

So I’m testing out my usual blogging tools. First, Performancing.

I’ve realised I’ve mentioned the idea of the hypervisor wars without explaining what I mean by it.

The underlying virtualisation technologies used in Intel’s VT and AMD’s Pacifica currently only allow a single VM Manager to run. This means that the installed VMM (the hypervisor) has an incredible amount of power – it controls what runs and how it runs. Install yours first, and the machine is yours – especially if you lock your hypervisor into TPM or similar security mechanisms.

So what would the hypervisor wars mean? Firstly, an end to the open systems model that’s been at the heart of enterprise IT for the last 25 years.

If Microsoft and VMware fell out, VMware could reduce the priority of Windows partitions. Other hypervisors might have licensing conditions that make it impossible to run non-free OSes as clients.

You could end up with a situation where each OS installation would attempt to insinuate its own hypervisor onto the system partition. Security partition developers may find that they are only able to code for one set of hypervisor APIs – locking end users into a closed platform.

The end state?

Co-opetition breaks down, the industry becomes enclaves built around hypervisor implementations, and the end user finds that they’re unable to benefit from the possibilities of an open hypervisor architecture.

Can we avoid the hypervisor wars? Optimistically, I think we can. There are prerequisites: we need an agreed hypervisor integration architecture, and we need it quickly. Let VMM developers compete on ease of operation and management, not on who controls your PC.


One thing to note about the new Apple Intel machines is that the Yonah processor supports VT.

With Apple saying that they’ll let Windows run on their hardware, the question is – will they let a third-party hypervisor run? I suspect not – especially if they are using TPM in secure startup mode. Of course, they’ll first need to enable VT in whatever BIOS they’re using…

So will Apple produce its own hypervisor, or will it badge a third-party tool? My personal suspicion is that Apple doesn’t have the skills to write its own hypervisor (there are only a limited number of people with the deep combination of hardware internals and OS knowledge required, and they’re mainly at Microsoft and VMware), so they’ll announce a partnership with VMware at the WWDC. Unless Apple’s been hiring the Xen dev team on the sly…

Apple will quickly need to gain the high ground in managing virtualisation on their platform – as they’ll need to maintain control of OS X running as a VM. Otherwise, will Apple be the first casualty of the hypervisor wars?


Adobe’s new Lightroom is, as they say, the bee’s knees.

Fast, responsive and ideal for working with RAW images, it takes the best of Camera Raw and Adobe Bridge and turns them into a one-stop shop for basic image manipulation and comparison. Best thought of as a digital lightbox, its adaptive UI makes it easy to hide the elements you don’t need and just concentrate on the images. An image workflow tool, it helps you manage how you work with images – and how you capture them.

Lightroom Beta lets you view, zoom in, and compare photographs quickly and easily. Precise, photography-specific adjustments allow you to fine tune your images while maintaining the highest level of image quality from capture through output. And best of all, it runs on most commonly used computers, even notebook computers used on location. Initially available as a beta for Macintosh, Lightroom will later support both the Windows and Macintosh platforms.

Which means it runs quite happily on my aging G4 PowerBook (unlike the G5-optimised Aperture).

That’s not to say that Lightroom is competition for Aperture.

This is more a first look at how Adobe is rethinking what people are doing with the Photoshop toolset, and putting together the beginnings of a script-controlled service framework for its next generation of imaging applications. It’s a model that fits in nicely with a conversation I had recently with Adobe’s CEO Bruce Chizen (which should be in the next issue of PC Plus), where we talked about Adobe’s strategic direction after the Macromedia acquisition. I’ll leave the conversation to the article – but one thing: I think Adobe is one of the companies that bears watching over the next three to five years.

(I’m glad I can talk about it now – I saw it in December, and was very impressed at the time – unfortunately I’d had to sign an NDA.)

Betanews notes that there won’t be a Windows version until Vista hits the market. I’m not surprised. I strongly suspect that Microsoft is working with Adobe to make Lightroom one of the apps that will be demoed at the Vista launch. The UI of the version that Adobe demoed back in December would work very well on WinFX – it’s ideal for XAML. Microsoft has had Adobe on stage showing proof-of-concept XAML applications in the past, so having it showing shipping code at the launch would make a lot of sense…

Cross posted to Technology, Books and Other Neat Stuff


Here’s a useful post from the always interesting Scott Hanselman, linking to hints and tips on how to use VMs more effectively.

There are a number of generally recommended tips if you’re running a VM, either in VMware or Virtual PC, the most important one being: run it on a hard drive spindle that is different than your system disk.

It’s good advice. I’ll be moving my set of VMs to a separate SATA drive on my main PC. However, sticking them on a fast USB 2.0 drive looks to be a sensible approach as well.

An interesting thought occurs – will we see hardware designed for hypervisors and hardware virtualisation coming with many hard disks? Or will we see a caching layer used, passing operating systems into partitioned cache RAM?


It’s a truism of the service world that open APIs mean more developers working with your public services.

Google is a good example of this, and it’s doing it again by opening up its Talk service with an interesting set of functions, as described on TechCrunch.

Libjingle looks very interesting (and probably something for me to think about with my Server Management messaging editor hat on). Quickly looking at Google’s announcement, we see a collection of tools that could make it a lot easier to build collaboration applications:

We are releasing this source code as part of our ongoing commitment to promoting consumer choice and interoperability in Internet-based real-time-communications. The Google source code is made available under a Berkeley-style license, which means you are free to incorporate it into commercial and non-commercial software and distribute it.

In addition to enabling interoperability with Google Talk, there are several general purpose components in the library such as the P2P stack which can be used to build a variety of communication and collaboration applications. We are eager to see the many innovative applications the community will build with this technology.

Below is a summary of the individual components of the library. You can use any or all of these components.

  • base – low-level portable utility functions.
  • p2p – The p2p stack, including base p2p functionality and client hooks into XMPP.
  • session – Phone call signaling.
  • third_party – Non-Google components required for some functionality.
  • xmllite – XML parser.
  • xmpp – XMPP engine.

Looks interesting.

The related Google Talkabout blog has just gone on to my blogroll…