…and that’s because much of what I was writing here, I’m now writing over at IT Pro with my wife and co-writer Mary Branscombe.
There’s a sample of the headlines over in the sidebar…
November 30, 2007
August 31, 2006
It’s official – we can now announce the new writing gig I’ve been hinting at.
On a more formal note, we’ll be doing several news stories a week, and a similar number of features a month – so we’ll be looking for plenty of press releases and people to talk to.
Any PRs with relevant clients, please, get in touch – we’re starting to work on September right now!
We’re not dropping any of our other regular writing. We’ll just be busier…
August 25, 2006
Amazon has now added a new layer to its utility computing platform with the Elastic Compute Cloud, EC2. S3 redefined the pricing model for utility storage, and EC2 looks set to do the same for utility and grid compute resources.
You should all give Mary the credit for the pun in the strap…
It also looks as though we’ll have some very interesting writing news to announce next week. Stay tuned to this channel!
August 24, 2006
Amazon is launching a new service – the Elastic Compute Cloud (or EC2 to its friends).
Like S3, its storage service, EC2 is a “cloud” service, treating compute resources as a commodity that can be charged for as a utility. Applications are packaged as machine images, with templates available to ease configuration. Amazon is currently supporting Fedora Core 3 and 4 Linux OSes with a 2.6 kernel, though it says any 2.6 kernel–based distribution should work. Each image is the equivalent of:
a system with a 1.7GHz Xeon CPU, 1.75GB of RAM, 160GB of local disk, and 250Mb/s of network bandwidth.
S3 will be used for storage. Pricing is good, too, especially when compared to Sun’s $1/CPU/hour:
- Pay only for what you use.
- $0.10 per instance-hour consumed (or part of an hour consumed).
- $0.20 per GB of data transferred outside of Amazon (i.e., Internet traffic).
- $0.15 per GB-Month of Amazon S3 storage used for your images (charged by Amazon S3).
Worth looking at as a prototyping facility, or as a source of quick compute power when required.
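To put those rates in context, here’s a quick back-of-the-envelope calculation. The per-unit prices are the ones quoted above; the usage figures (hours run, traffic served, image size) are invented purely for illustration:

```python
# Rough monthly cost estimate for a single EC2 instance under the beta
# pricing quoted above. The usage numbers below are hypothetical.

INSTANCE_HOUR = 0.10   # $ per instance-hour (or part thereof)
TRANSFER_GB = 0.20     # $ per GB transferred outside Amazon
S3_GB_MONTH = 0.15     # $ per GB-month of S3 storage for machine images

def monthly_cost(hours, gb_out, image_gb):
    """Total bill for one instance over a month."""
    return (hours * INSTANCE_HOUR
            + gb_out * TRANSFER_GB
            + image_gb * S3_GB_MONTH)

# One instance running flat out for a 30-day month, serving 50GB of
# Internet traffic, with a 1GB machine image held on S3:
print(round(monthly_cost(30 * 24, 50, 1), 2))  # 82.15
```

Roughly $82 a month for an always-on server, in other words – most of it the instance-hours, which is why switching instances off when they’re idle is where the utility model really pays off.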
It’s only a limited beta to start with, though – so don’t start piling on to it yet!
There’s a FAQ here.
Crossposted to Technology, Books and Other Neat Stuff.
August 22, 2006
Mary and I will be in and around San Francisco, San Jose, Silicon Valley and the Bay Area at the beginning of October.
If you’re a PR or a company representative, get in touch, as we’re looking for technology companies to visit whilst we’re there.
We’re interested in everything from enterprise architecture to desktop applications, with a particular interest in mobile and social technologies, as well as tools for managing service-oriented architectures.
Want to give a US client exposure in the UK? Drop us a line!
[This is something in the way of an experiment to see if I can use this blog as a tool for handling pitches and RFIs.]
August 16, 2006
Speaking in London this week, Sun’s Chief Open Source Officer Simon Phipps announced the next stage of the open sourcing of Java, adding Java ME to the roadmap. Open source versions of both Java ME and Java SE should be available by the end of the year.
While no actual dates were confirmed, Phipps went into more detail on the open source roadmap for Sun’s various software platforms. Describing it as a gradual process, he detailed Sun’s commitment to providing an open source software stack, from the OS to Java and, in the future, its middleware.
We also talked about the missing element in many Open Source projects: governance.
While one of the keys to Open Source is the license, another is just how the project is run. And Simon sees one big problem facing many open source projects.
It’s all very well being open source, but if only one person has commit rights (the ability to make changes to the code base), they’re going to become overwhelmed very quickly once the project becomes successful. Things get worse when commit rights are concentrated in a single company. A project run like that (and there are many, many of them, including some very high-profile ones indeed) is more like Microsoft’s shared source programme than anything else. There have even been cases where experts on a piece of code have left the company that sponsors the project, and have immediately lost any rights to work with the codebase…
The really successful projects, like Linux and Apache, have distributed commit rights, with a range of people from many different organisations adding code. That’s what Phipps wants to do with Sun’s open source projects. OpenSolaris is certainly successful, and has spawned several different distributions (including one that mixes Debian with a Solaris kernel), and he hopes to do the same with Java.
Crossposted to Technology, Books and Other Neat Stuff.
July 5, 2006
There’s a lot to be said about AMD’s “Trinity”, its secure virtualisation platform, and “Raiden”, its client model for blade servers – but the really interesting story is “Torrenza”. This is where AMD opens up its HyperTransport pipeline to third parties – as well as its socket specifications. So on a multicore, multiprocessor motherboard, you could drop in a physics coprocessor for fast gaming, or (and this is where I think things will get very interesting) a dedicated processor for additional server functionality.
This is where AMD needs to talk to companies like Azul. Dropping one of Azul’s 48-core VM-specific processors onto a HyperTransport bus alongside a set of Opterons could really speed up your Java applications (with direct access to the system memory) – and get rid of all that nasty non-deterministic garbage collection…
AMD is taking the enterprise server game in a very different direction to Intel. Let’s see if the industry takes them up on it…