A technical note about hosting…

I provide a lot of hosting support for my students. It’s best to learn on real-world systems, not XAMPP or other self-contained simulators. Thus I’ve registered a domain, set up Google Apps for email and wiki-like services, and configured a Linux-based host for the class.

This doesn’t have to cost a lot. The “SmallMan” server was built new on a budget of $325. It consists of: an Intel D510MO board (dual-core 1.66 GHz Atom CPU), 4GB RAM, a 500GB disk drive, a dual-network add-on card, an extra fan, an ITX case and P/S, and a DVD-ROM drive. It hosts three VMs, providing 23 webhosts, two email servers, and various other support services… and draws a whopping 25 watts under heavy load.

Unless I look at it I can’t tell it’s running.


Time to move…

It will be moving day soon. Not for me, but for students wanting to take their accomplishments forward.

Since January, I’ve been teaching a course in Web Architecture. In practice this has left students with a number of websites, in various states of completion/construction/disrepair and so on. Most will have a customized WordPress install and a Drupal 7 system.

For the time being, the course is hosted mostly on my little in-house server. By the end of the calendar year, students will have to move off this server.

But where to go?

There are generally three possibilities: host it yourself, pay for hosting, or take it down.

“Host it yourself” works only IF (big IF) you have: 1) the requisite knowledge to install and configure a web hosting environment; 2) a computer to do this on; and 3) appropriate rights for hosting from your service provider. While my class teaches the first component, the others are beyond my control.

I think most will opt for paid hosting, or take it down. It’s too bad the college doesn’t provide hosting support for student projects.

Passwords and Accounts

I’m beginning to be overwhelmed.

A few weeks ago, I lost a USB key (or flash drive) with a copy of my master Firefox profile on it. The master profile has all the passwords on it. Think about that for a minute. ALL THE PASSWORDS. In one place.

Ouch.

After a rather frantic day changing the passwords on 227 different accounts, and struggling with a new password regimen, it became clear: I need a way to manage passwords. I probably also need better passwords, or at least more of them.
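On the “better passwords” front, any scripting language can generate a unique random password per account. Here’s a minimal Python sketch using the standard `secrets` module; the function name, length, and character set are my own choices, not anything prescribed above:

```python
import secrets
import string

def make_password(length=16,
                  alphabet=string.ascii_letters + string.digits + string.punctuation):
    """Generate a random password using the OS cryptographic RNG."""
    return "".join(secrets.choice(alphabet) for _ in range(length))

# One unique password per account, never reused:
print(make_password())
```

The point of `secrets` (rather than `random`) is that it draws from the operating system’s cryptographically secure source, which is what you want for credentials.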

In the process, I also found which sites had rather poor password policies, and I’ve made a list of places to re-assess. In this day and age, password policies of “all numeric” or “only eight characters” or “upper-lower case only no numbers” are absurd. I’ve already decided to change vendors in some instances, due to absurdist password policies.

I still have to figure how to manage the passwords. There are several commercial solutions, as well as some open-source, but they almost all suffer from one or more drawbacks. I guess I’ll end up making a compromise, somewhere.

The first problem is with the hardware solutions – you have to carry the device around with you, it needs batteries, it stores only a small set of passwords, and what if I lose it? I don’t think I’m going to use a dedicated hardware unit.

As for the software solutions, I think I’ll have to go with one of them; but as an alternate path, I’m also beginning to use OpenID. I have accounts on several of the providers, but after having poked around a bit, I think I’ll end up using the Google-based provider most often. In order for this to work, of course, you have to have a Google Profile – and thus a new webpage was birthed.

Along the way I’m also going to finally take the plunge into the smartphone pool – StupidPhone™ is starting to wear out, and it’s about time I stepped forward from the trailing edge of technology. Whichever password manager I pick needs to run on an Android-based phone.

Growing tired of Facebook…

I think I’m about to reach the end of the line on Facebook… not totally, I’ll keep a few people on the list, but I’m realizing it is:

1) a colossal waste of time; 2) riddled with bugs and viruses; and 3) not a particularly viable medium for discourse.

This will get updated some over the next few days, but I’m about to trim the Facebook “friends” list from its current 145 to perhaps a third of that number. Among other things, Facebook is reminding me why I haven’t bothered to return to Burlington NC for well over 20 years.


Would the Internet exist without US Government sponsorship?

Yet another post based on the muse of Facebook…

I gotta ask, how subsidized is the “internet”? Would this thing be able to operate on a free-market, in your opinion (I would assume yes, as it has massive profits available to it), but, could the start up of the internet, been possible without subsidization? Not sure how clear my question is. (a Facebook Friend)

My response:

As it is right now, there are no subsidies involved… it’s self-supporting based on domain registration fees and general good-will of the various commercial suppliers involved. To the extent the US Govt is involved at present, it is as a major consumer of bandwidth, and as a content supplier.

Starting out… The Internet (TCP/IP) protocol suite displaced X.25, which was available commercially from the late 1970s (I had an account from May 1978 onwards via Tymnet). X.25 is based on virtual circuits and is closer in conception to telephone switching than to the current Internet.

In X.25 networks, you connected to a single destination, and relied on that destination to provide your content and services. This was the original function of services such as CompuServe, Delphi, Prodigy and AOL. By 1993 the X.25-based services were handling around 20 million subscribers compared to TCP/IP having perhaps 500,000 users. It’s for this reason Windows 95 did not handle TCP/IP very gracefully; there was a good business argument to be made against the whole Internet “fad.”

In ’93 or ’94 the US Govt started to transition out of running the “Internet” – and opened it up to commercial users. Since TCP/IP ran on damn near anything (X.25 required special switches and lots of infrastructure by comparison) and had no messy royalties and such, it began to catch on quite quickly.

On bringing light to the darkness…

Yet another post inspired by Facebook discussion. I think I’m beginning to find my muse…

The web is not quite 20 years old (Dec ’91 was when the concept was published). While for most people the Internet revolves around Internet Exploder or Firefox or Safari, there were other products.

In the beginning, there was Mosaic. And it brought light from the darkness, but it was featureless. It begat Netscape, which had features, but crashed a lot, and eventually was bought by AOL who set about to kill it. Marc A set people free by leading a small band through the wilderness to start Phoenix, but they ran afoul of trademark and thus begat Firefox.

Somewhere along this path, Bill Gate-us of Borg beheld Mosaic, and begat Internet Explorer, which being of parentage foul became the source of much pain and suffering and in derision it is named Exploder.

Thus ends the quick genealogy lesson.


…and if the foregoing largely makes no sense, then here is the deal:

You access the Web by way of specialized software, the “Browser.” It allows you to browse content, in much the same way people [used to?] browse the shelves at the library, looking for something interesting to pull down and read.

NCSA Mosaic, Netscape, Firefox, IE, Safari, etc. are all examples of the “graphical browser.” This is the interface almost everyone uses, and for many people, this is “the Internet.”

It’s only a part of the Internet. There are also text-based browsers; Lynx is the most prevalent of these. Why would anyone use a non-graphical browser? Suppose you’re blind, but still want to make use of the Internet. You don’t need to download the pictures (can’t see them anyway). Or, suppose you have limited bandwidth, but need to get some information. One regular reader of this blog always makes disparaging remarks about the US National Weather Service relying on UPPERCASE TEXT FOR ALL WX MESSAGES – but there are both international treaties as well as good solid engineering reasons for having the all-caps text. [Technical rationale – all-caps can be transmitted as 6-bit code, thus saving 25% of bandwidth; while there is a single source for most WX data there are multiple output streams, at least some of which still use Baudot encoding.]
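The 25% figure in the bracketed note follows directly from the character widths: 6 bits instead of 8 per character. A back-of-the-envelope check in Python (the sample message text is made up for illustration):

```python
# All-caps text fits in a 6-bit character code (Baudot-style), versus
# 8 bits per character for plain ASCII. Savings = (8 - 6) / 8 = 25%.
message = "SMALL CRAFT ADVISORY IN EFFECT"

bits_ascii = len(message) * 8   # 8-bit encoding
bits_sixbit = len(message) * 6  # 6-bit encoding
savings = (bits_ascii - bits_sixbit) / bits_ascii

print(f"{savings:.0%}")  # → 25%
```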

Back to the story… Browsers convert the user’s simple “woodallrvcc.wordpress.com” to the several lines of commands necessary for the webserver (the other end of the conversation) to find the content the user desires; then the browser interprets the content received and displays it…
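Those “several lines of commands” are an HTTP request. A simplified sketch of what the browser actually sends for the hostname above (real browsers add many more headers than this):

```python
# The text a browser sends to the webserver for "woodallrvcc.wordpress.com"
# (simplified; actual browsers include many additional headers).
host = "woodallrvcc.wordpress.com"
request = (
    "GET / HTTP/1.1\r\n"    # method, path, protocol version
    f"Host: {host}\r\n"     # which site we want (servers host many)
    "Connection: close\r\n" # close the connection after this response
    "\r\n"                  # blank line ends the request headers
)
print(request)
```

The server’s reply comes back in the same plain-text framing: a status line, headers, a blank line, then the content the browser renders.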

It’s important to remember the browser does not represent the entirety of the Internet, and also that browsers are not a one-size-fits-all — except for the moment on so-called “smart phones.” So if you’re looking for that extra edge, try another browser.

Thunderbolt and Light Peak

On February 24, 2011, Apple (with an assist from Intel) attempted to change the world. Again.

Fizzy fizzy (fizzle?) – to be expected when you make Lemonade from a lemon.

That’s my quick take on Thunderbolt – it’s an attempt to make lemonade from a lemon.

Here’s the picture: Apple fell behind on peripheral connections. This is the attempt to leapfrog over everyone’s head and come out with something all shiny and new. Apple was first to use Firewire, first to have USB-only notebooks, and then they stagnated. They made a couple of updates, adopting USB 2.0, changing to Firewire 800, but they ignored eSATA and USB 3. Their notebooks have always been somewhat crippled by a lack of external ports (I love seeing the big bags of holding carried by serious Apple users, which contain the USB hub and cables and external drives and ephemera considered ‘necessary’).

The world marches on, and Apple belatedly realized they needed a new external peripheral bus. Thus Thunderbolt.

Except what they picked isn’t all that shiny, or new, and might even be regarded as a bit of a flop. LightPeak is Intel’s next-generation peripheral bus; based on optical fiber, it promised multi-gigabit throughput and tons of interconnectivity. Thunderbolt has the 10 Gbit/s throughput… but on copper wire. You have to believe something went a bit wrong between the lab and the showroom.

There’s some buzz generated by the incorporation of DisplayPort technology into Thunderbolt. How this plays out is still up in the air, but it does bring one thing to my mind: DRM. That’s right, along with Intel’s next-generation processors which include DRM on-chip, now the peripheral bus will also have Rights Management. Videographers might want to re-read the fine print on the H.264 licensing agreements… and contemplate. Of interest also is the sole-sourcing for Thunderbolt controllers (Intel) and the de-facto imposition of a royalty on implementation. It’s this last which effectively destroyed Firewire in the marketplace. Apple can be a slow learner at times.

Thunderbolt/LightPeak allows for seven devices, daisy-chained (one after another), with DisplayPort at the end of the chain. Theoretically there is 10W of power for peripheral devices – watch those batteries drain! Thus there is more peripheral power available, but far fewer devices can be attached. Right now that’s not a problem as only LaCie has Thunderbolt product in the channel. Only a handful of peripheral manufacturers have so far climbed onto this bandwagon.

One other thing I see from reading the specifications – Thunderbolt allows direct memory access to main system memory (this system operates peer-to-peer just like Firewire) and thus may well have the security hazards of Firewire as well – do you really trust that projector you attached?