Bash Browser

Unannotated and unformatted, this is a one-liner that can fetch a file via HTTP (assuming no authentication or proxying is needed). It was written to show that a basic Linux From Scratch system already includes everything you need to do this: bash.

It is presented here as an example of how bash's TCP socket handling can be used. As written, it'll download links-2.1pre1 for you, and then you'll have a real browser to go download new things with.

(echo -e "GET /~clock/twibright/download/links-2.1pre1.tar.bz2 HTTP/0.9\r\n\r\n" \
  1>&3 & cat 0<&3) 3<> /dev/tcp/<server>/80 \
  | (read i; while [ "$(echo $i | tr -d '\r')" != "" ]; \
  do read i; done; cat) > links-2.1pre1.tar.bz2

To explain how this works, I'll first explain what it's trying to do, then break it into sections to show how it accomplishes that.

To download something by HTTP, you must connect to the server, send an HTTP request (such as "GET / HTTP/0.9") followed by any other HTTP headers you wish to specify, followed by a blank line. The HTTP server should then respond with a status code and explanation (e.g. "200 OK"), some other HTTP headers, a blank line, then the content.
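To make the wire format concrete, here is a small sketch (my own addition, not part of the original one-liner) that builds the same kind of request with printf and dumps the raw bytes, so the \r\n pairs and the blank line terminating the headers are visible:

```shell
# Build a minimal HTTP request and show its raw bytes; od -c prints the
# carriage returns and newlines as \r and \n, so the CRLF pairs and the
# blank line that ends the headers are easy to see.
printf 'GET / HTTP/0.9\r\n\r\n' | od -c
```

The last two bytes shown are the \r \n of the empty line that tells the server the request is complete.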

In bash, it's quite easy to open a network socket, since it presents a virtual file interface to them at /dev/tcp/<server name>/<port>. So we could echo the HTTP request to /dev/tcp/<server name>/80 to contact <server name> on port 80. However, each separate operation on a /dev/tcp file creates a new connection and closes it at the end. We need to keep the same connection and read the response from it.

This is accomplished by creating a new subshell, and setting up fd 3 to read and write from the socket:
( ... ) 3<> /dev/tcp/server/port
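The <> operator opens a file descriptor for both reading and writing. You can try the same redirection on an ordinary file instead of a socket; this sketch (the temp file is just a stand-in for the socket, for illustration) writes the request through fd 3 and then shows what was "sent":

```shell
# Open fd 3 read/write on a scratch file, the same way the one-liner
# opens it on /dev/tcp/<server>/<port>.
tmpfile=$(mktemp)
exec 3<> "$tmpfile"                     # fd 3 now reads and writes this file
printf 'GET / HTTP/0.9\r\n\r\n' 1>&3   # anything sent to fd 3 lands in it
exec 3>&-                               # close fd 3 again
head -n 1 "$tmpfile"                    # shows the request line we "sent"
rm -f "$tmpfile"
```

With /dev/tcp in place of the scratch file, those same writes go out over the network, and reads from fd 3 return the server's response.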

Now the two commands in the subshell must run concurrently, so the echo is followed by an ampersand to have it run in the background, and its output is redirected to fd 3 (1>&3). Meanwhile, we use cat to take input from fd 3 (0<&3), and show it on stdout.

The output from this whole section is the complete response from the server.

The rest is pretty easy; we pipe the output of that into a loop which reads the HTTP headers until there's a blank line, then uses cat to pass the rest through, and we redirect the result to the file.
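You can watch this header-stripping stage work in isolation by feeding it a canned response instead of a live socket (the response text here is made up for illustration):

```shell
# Pipe a fake HTTP response through the same loop the one-liner uses:
# the status line and headers are consumed, and cat passes the body on.
printf 'HTTP/1.0 200 OK\r\nContent-Type: text/plain\r\n\r\nhello, body\n' \
  | (read i; while [ "$(echo $i | tr -d '\r')" != "" ]; \
     do read i; done; cat)
```

Only "hello, body" reaches stdout; the status line and headers are discarded. The tr -d '\r' is needed because read strips the \n but leaves the \r, so a "blank" header line still contains one character.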

Of course, there's no error handling; if the server returns a "404 Not Found", the file will contain the HTML for the error page rather than the downloaded file, but it IS just a one-liner, after all.
