How I browse the web

Previously on this blog, I wrote about why I routinely block execution of JavaScript. I think it opened some eyes. In this post I’ll look at the precise software and settings I use for web browsing.

During a typical day, I use three browsers. Internet Exploder I use for the handful of sites which are crippled (apparently by intent) and with which I absolutely must interact – in practice, that means organizations who’ve lost their minds and deployed Microsoft Exchange. Google Chrome I use for Google Maps and some other sites… if they ever fix the sandbox I’d use it far more often.

Most browsing is done in Firefox. Today I’m running Firefox 10.0.2. I have several extensions installed – principally NoScript, AdBlock Plus, Fireshot, TinEye and HTML Validator. Only the first two are security-oriented.

NoScript is a JavaScript blocker. I use it in its most restrictive form; it only allows JavaScript to execute if I’ve approved it. Right now I’m allowing scripts from wordpress.com (hosting site), wp.com (hosting site), and gravatar.com (icons and avatars); and I’m forbidding quantserve.com (advertising metrics).

My “whitelist” runs to about 1100 entries; these are all JavaScript sources I’ve come to trust. Everyone else is in the temporary list.

Yes, it’s annoying to have to whitelist everything. There are a few sites where I can’t come up with a good mix of permissions, so for vimeo and wimp (video sites) I use Chrome. This is certainly not for everyone… but doing things this way gives me great peace of mind when clicking and exploring.

AdBlock Plus is, as the name implies, an advertising blocker. I run it in full-blocking mode – by default it operates in a gentler mode which allows “some” supposedly non-intrusive ads, and I’ve turned that off.

Dear website operators – I will pay subscription fees. I will not sit through interminable ads – nor do I enjoy having ads which carry along malware infections as part of the “animation” scripting. It’s always a shock to see just how many ads play on some sites… and the lack of care with which so many companies use ads (aside – if you’re a car repair emporium, is it really wise to run advertising for brands of tires you don’t sell – and ads which go to your competitor when clicked?).

The other tools mentioned above… Fireshot is used for screen grabs; HTML Validator looks for problems in HTML (useful when testing the website you just created); TinEye searches for the source of photos.

Here are some more hints for safe browsing:

1) I don’t do games. I block every new Facebook game which comes along; the only online game I’ve played in many a year was Angry Birds for about 5 minutes via Chrome. That was enough.

2) I don’t download new software to be able to see the wonderful video-of-the-day. If it won’t play in Firefox I evaluate the source, and perhaps play it in Chrome. But first I check NoScript to see where the playback scripts come from.

3) If I’ve never heard of the site before, I open a Google search window, enter the name of the site, and see if Google thinks it’s OK. The StopBadware gang is quite adept at turning over rocks.

Why I block JavaScript.

This subject surfaces from time to time, especially when I’m conversing with the bleeding-edge web design community. “You do WHAT?” followed by a lot of strange looks and laughter is the typical reaction. Then I’m told all about how JavaScript has been “modernized” and “browsers are sandboxed” and other nice things.

I run a variety of browsers; the current desktop has Firefox (with NoScript); Chrome; IE 4; IE 6; IE 8; and Lynx. Most of the time I browse with Firefox/NoScript. Yep, it slows me down, and there’s the minor annoyance of having to set temporary JavaScript execution privileges. This post will attempt to explain why I do things the way I do. Standard disclaimers apply.

First two-word explanation: Zeus Trojan.

The Zeus Trojan is a password-stealer, usually deployed via JavaScript malware introduced to the victim by way of an infected website. As JavaScript has “matured” it also allows for much-improved obfuscation and cross-linking and all sorts of nice ways to operate an attack vector dynamically (to the point where most Zeus variants check location data and refuse to infect systems in certain countries).

For US-based small businesses (and local governments) there is no protective cap on money stolen via identity fraud – and this is the standard use of Zeus. Once the credentials are acquired, the thieves can empty a bank account in a matter of hours – and there is no legal recourse against the bank. The money is gone; the victim is not going to get it back.

A part of my professional practice deals with security – no, I’m not going to enter a forum with all the scripts executing. I’d only look foolish.

…and as I’m writing this post, in over the transom flies this notice – Google has awarded $60,000 as a prize in the Pwnium competition, for a method to overcome Chrome’s “sandbox” feature and run code on a fully-patched Windows 7 system. All that is necessary is for someone to browse to an infected website – viewing the page is sufficient to load and execute the payload. A little bit of JavaScript acts as an enabler – there’s no need to bother with an exploit attempt if the browser is something else.

Another reason not to automatically run JavaScript is a common Facebook malware attack – the click-jacking survey scams which pop up several times a day. Click-jacking is a specialized attack vector on Facebook which works by having the victim click on a link – which leads to a survey – and which also “spams” the link as a status post from the victim. If you run with JavaScript enabled you’re usually taken straight over to the payload page – typically a survey… but it might be something worse.

By not running JavaScript I get stuck on the interstitial dispatch page – the page the Facebook click-jack link actually leads to – which contains various JavaScript functions to identify the victim. Typical contents include a bit of geolocation code used to decide which survey to serve. From time to time I see ones where the dispatch code includes a mechanism to reject the visit if the location appears to be in .ru, .ua, .by or .ge – authorities in those countries only pursue cybercrime if local users are affected. Generally speaking, if the interstitial page contains the ru-ua-by-ge code, the payload page is loading something other than a simple survey.

But security isn’t the only reason to avoid JavaScript.

Second two-word answer: Existing Investment.

This probably comes as a shock to many web designers – but companies don’t rush right out and buy the latest technology just because it got a great writeup on reddit or slashdot or wherever, or even because it’s a best seller on Amazon or in the Apple store. There are a lot of systems out there with no capacity to execute JavaScript (embedded devices) or where internal policies discourage its use. For more than a decade I’ve been writing web apps which require neither JavaScript nor cookies on the browser in order to maintain state… and I know that some of these clients are not going to change those devices or policies for at least several years. Have you discarded your car simply because its OBD (diagnostics) port works at a glacial 1200 bits/sec over a serial line?

Not executing JavaScript allows me to see how these clients perceive the “outside world” and thus better understand their mindset. It is very interesting to see which major companies’ websites are still functional without JavaScript (although not all the bells and whistles may work).

The amazing light-ness of seeing

I’m on a photography kick. So shoot me.

Photography is all about the light – recording photons for posterity.

When you take a photo is sometimes even more important than where you are. Same place, close to the same angle… which is more pleasing?

[Two photos: the same scene, shot at different times of day]

I know which one I’d pick.

All that changed was the time of day… one was shot in the “golden hour” – so-called because it’s when the light is truly golden in color – near sunset. There is a thinner golden segment early in the morning, especially in spring and fall when the sun is angled – but I hesitate to call it a golden hour… it’s more like twenty minutes or so.
Here are a couple of first-light images…

A matter of focus

Which of these photos is in proper focus?

[Photo 1: focus on the near subject]

or

[Photo 2: focus on the distant subject]

In this case, focus becomes a matter of preference.

What we’re really dealing with is another issue, Depth of Field (DoF) – that portion of a photograph which is perceived to be acceptably sharp.

Depth of Field is one of those “advanced” topics in photography… until you grasp how it works, and how to control it. (Note to the purists – this discussion is henceforth simplified)

Controlling DoF is all about controlling the three variables which directly affect it – lens focal length, lens aperture and camera-to-subject-distance. All three interact to produce differences in DoF. And while your camera may be largely automatic and you may have no control at all over one variable, you’ll almost always have control over one or both of the others.

Lens focal length is usually expressed in millimeters (mm). Longer means “zoomed in”; shorter means “wide angle.” Most point-and-shoot cameras have modest zoom lenses of the 3x variety; some cameras have “ultra-zooms” in the 10x to 15x range. Cellphone cameras often have no control at all over this variable. Note that with cameras supporting interchangeable lenses it is not necessary to use a zoom lens; you can also use “prime” (fixed focal length) lenses… and a prime lens will usually offer a bit better control over aperture values. Wide-angle and telephoto are relative terms – the size of the sensor (or film frame) determines the useful ranges. Within 35mm photography (and most dSLR systems) 18mm is wide, 200mm is telephoto, 500mm is serious telephoto, and 1250mm is wicked close (and the DoF is paper-thin!).

For our purposes, longer focal length (telephoto) produces shallower depth of field and shorter focal length (wide-angle) produces deeper DoF.

Lens aperture is the size of the opening of the iris in the lens; i.e. how big a hole the light comes through. This is expressed as a logarithmic ratio in the form of f-stops. F-stops for camera lenses typically run from f/1.4 at the wide-open (think really huge) end to tiny pinpricks of light up around f/64. As you go up the scale, each full stop halves the amount of light; thus if we take f/1.4 as “full” open, then f/2 (the next increment up) allows 1/2 the light, f/2.8 is 1/4, f/4 is 1/8 and so on. Photographers usually refer to low f-stop values as “open” (or “fast” – the lens gathers light faster), and high f-stop numbers as “closed” (or “slow”). Zoom lenses are limited as to how far open they get; f/2.8 is a very fast zoom (wide open) and f/8 is rather slow (closed down).
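To make the halving concrete, here’s a quick sketch of my own (not from the original discussion) that walks down the full-stop scale. The trick is that nominal markings like f/1.4 and f/2.8 are rounded labels for exact powers of √2, and the light admitted varies with the inverse square of the f-number – which is why each full stop exactly halves the light.

```python
import math

# Light admitted at each full stop, relative to f/1.4 taken as "full" open.
# Nominal f-numbers are rounded; the exact values are powers of sqrt(2).
for stop in range(10):                          # f/1.4, f/2, f/2.8, ... f/32
    f_number = math.sqrt(2) ** (stop + 1)       # exact value behind the marking
    print(f"f/{f_number:4.1f}  ->  1/{2 ** stop} of the light at f/1.4")
```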

For our limited interest in Depth of Field, the rule is this: for a given focal length, depth of field is reduced as you open the lens, increased as you close the lens.

Camera-to-subject-distance is self-explanatory… isn’t it? In the photos above, camera-to-subject-distance is the variable which has changed – I shifted the subject from the up-close gun barrel to the more distant aircraft. Focal length stayed constant at 50mm and aperture at f/10. If I’d backed off a bit (probably about 10 feet would have done it) both subjects would have been in focus… but that wasn’t the effect I wanted!
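For anyone who wants to see the three variables interact numerically, here is a rough sketch using the standard hyperfocal-distance approximation. The function, the 0.03 mm circle of confusion (the usual figure quoted for 35mm/full-frame), and the subject distances are my own illustrative assumptions, not measurements from the photos above.

```python
def depth_of_field(focal_mm, f_stop, subject_m, coc_mm=0.03):
    """Return (near_m, far_m) limits of acceptable sharpness.

    Standard hyperfocal approximation; coc_mm is the circle of confusion
    (0.03 mm is the usual figure for 35mm / full-frame sensors).
    """
    s = subject_m * 1000.0                                  # subject distance, mm
    hyperfocal = focal_mm ** 2 / (f_stop * coc_mm) + focal_mm
    near = hyperfocal * s / (hyperfocal + (s - focal_mm))
    far = (float("inf") if s >= hyperfocal
           else hyperfocal * s / (hyperfocal - (s - focal_mm)))
    return near / 1000.0, far / 1000.0                      # back to meters

# 50mm at f/10 on a nearby subject: a comfortably deep zone of sharpness.
print(depth_of_field(50, 10, 3.0))      # roughly (2.2, 4.6) meters
# 300mm at f/5.8 on a distant subject: thin relative to the distance.
print(depth_of_field(300, 5.8, 30.0))   # roughly (28.4, 31.8) meters
```

Plug in your own numbers and the rules above fall out: a longer focal length, a wider aperture, or a closer subject each shrink the in-focus zone.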

Let’s look at these two photos, shot back-to-back. The closeup is at f/5.8 and 300mm focal length; the wide-angle view at f/4.5 and 75mm.

[Photo: player close-up – 300mm at f/5.8]

[Photo: player wide view – 75mm at f/4.5]

The increased focal length more than makes up for the slight closing of the lens; note how blurred the background is compared to the second photo. The two were taken about ten seconds apart.

A milestone reached…

Post #25, which, to Automattic, means they shall now unleash the automated hounds-o-advertising and try to convince me to “upgrade to pro.”

Not this week. Sorry.

And now for something completely different… over the next several posts, the focus of the blog will change. (Focus? He’s got focus? Yep – we’ll prove that).

Thus endeth the short post #25.

Keeping up the pace…

Starting tonight the assignment for the 232 crowd (web architecture) is to build a blog on a hosted platform, and update it three times a week.

If I assign it, I should be able to do it.

Famous last words, but perhaps not.

The in-between-class questions today have centered on hosting providers – which will be the subject of a homework assignment, I think… but not just yet. First we have to cross this bridge – getting the first “real” content up (as opposed to “un”-real, which is how I classify Google-Sites content).

Initially, student blogs will be linked to the class webpage inside the college portal. Upon approval by students, selected blogs may be featured as links from this blog… but again, only with explicit approval of the affected student(s).

Almost time for class…

Hacking HTTP via GET; part the second. (finally!)

When I left off (Hacking HTTP via GET; part the first) with this subject, I demonstrated the basics of “hacking” via modifying parameters on a GET method.

But what of methods? And why GET? And what else is there?

A method is a subroutine (or function, or procedure, or whichever semantic construct you prefer) which is bound to an object (or class), and which is executed (or performed) against an instance (copy) of that object. Or so sayeth the oracle of the Wikipedia.

In the case of the web, wherein we are in a stateless protocol (that is, there is no implicit memory of what came before), the protocol itself defines a group of “methods” – or actions to be taken.

The currently-defined (HTTP 1.1; RFC 2616) methods are: OPTIONS, GET, HEAD, POST, PUT, DELETE, TRACE and CONNECT. For our purposes, the methods of particular interest are GET and POST.

Why GET? Because it’s the basic, easiest-to-comprehend (and generally easiest to program) method when data needs passing from the client to the server. When you go to a web page such as the homepage of this blog, your browser sends a GET request for “/” (root, or whatever is aliased to root).

GET has an attribute (feature? flaw?) of exposing all the requested parameters as part of the URI.

It’s this behavior that makes it possible to “hack” via GET – the parameters are exposed, and thus changeable before the request is sent. It’s also this behavior that makes GET the most popular way to send parameters – it’s much easier to debug! And there is a deeper, more technical reason as well: buffering on the server side is handled by the web-server software, not the application program.
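As a concrete (and entirely made-up) illustration – the shop.example.com endpoint and the parameter names below are hypothetical – here is how a GET request carries its parameters in plain view:

```python
from urllib.parse import urlencode

# Build the kind of URL a form submitted via GET produces.
# Everything after the "?" is visible and editable in the address bar.
params = {"item_id": "1042", "qty": "1", "price": "19.95"}
url = "https://shop.example.com/cart?" + urlencode(params)
print(url)
# https://shop.example.com/cart?item_id=1042&qty=1&price=19.95
```

Change price=19.95 to price=0.01 in the address bar and press Enter, and you’ve reproduced the trick from “part the first” – which is exactly why a server must validate every GET parameter rather than trusting what the browser sends.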

POST is the other method for sending data; the authors of HTTP 1.1 thought most forms would be handled via POST requests. POST hides the data being sent – and is capable of handling much larger objects than is GET. But it is significantly more trouble to program for a POST method, and debugging is a bit more “interesting” as well.
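Here is the same hypothetical submission re-cast as a POST (again a sketch, using Python’s standard library and the made-up endpoint from above). Note that “hiding” the data only means it stays out of the URI – anyone with a proxy or the browser’s developer tools can still read and alter it:

```python
import urllib.request
from urllib.parse import urlencode

# The parameters now travel in the request body, not the URI.
data = urlencode({"item_id": "1042", "qty": "1", "price": "19.95"}).encode()
req = urllib.request.Request("https://shop.example.com/cart", data=data)  # data => POST
req.add_header("Content-Type", "application/x-www-form-urlencoded")      # as a browser form would
# response = urllib.request.urlopen(req)   # uncomment against a real endpoint
```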

Of the other methods, HEAD is widely used – it requests and receives header and meta-information about a resource, and is often issued by browsers simply to check if the server version is newer than the locally-cached version.
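A quick sketch of HEAD in action (example.com is just a convenient placeholder host) – the response carries the status line and headers such as Last-Modified, but no body:

```python
import urllib.request

req = urllib.request.Request("https://example.com/", method="HEAD")
with urllib.request.urlopen(req) as resp:
    print(resp.status)        # e.g. 200
    print(resp.getheaders())  # header and meta-information only; resp.read() is empty
```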

PUT and DELETE are the precursor methods to WebDAV (web-based distributed authoring and versioning) but are rarely encountered; TRACE is a debugging method and CONNECT deals with proxy tunneling.