Thursday, August 28, 2008

Halfwitted journalist thinks only rich people should blog

This Washington Post writer suggests that blogs are not only unimportant but actively harmful to democracy and society. Therefore, he argues, the government should make them more expensive, so that regular people can't afford to have them.

And if that wasn't already lunatic enough, he proposes to do it by a massive energy tax.

This is so awesome I thought for a minute it was satire. Dusty Horwitt, pissant little environmentalist, lawyer, and journalist (and Bill Clinton impersonator, on the side) thinks blogging is the death of democracy.

Apparently democracy means a few rich people talk, while everyone listens. You... yes you... have the right to shut the hell up and be a good audience. Stop blogging! You're polluting the infosphere! You're killing newspapers, communities, democracy, and the environment, and if you're in the U.S., you're forcing jobs overseas! All by creating an "information avalanche". God knows we should all beg to go back to the days when we apparently sat around getting a political "education" from notorious anti-Semite and Hitler fan Charles Coughlin... as Horwitt suggests.

Here's Horwitt's argument:

1) The proliferation of blogs makes it impossible to find relevant information
2) The only important information is "politics and news"
3) The Internet hurts newspapers
4) And Democracy!
5) Fragmented media outlets fragment society!
6) Personal computers, and data centers, are bad for the environment
7) Therefore, create an energy tax so regular people can't afford to have computers, and can't blog! That way, they'll shut up and listen to proper sources of information and become "educated".

So, regular people shouldn't have access to computers. This is a new one on me. Let's increase the digital divide, to... empower people! And to Make America Great!

"Rather than call for government regulation of technology itself, perhaps the best way to limit the avalanche is to make the technologies that overproduce information more expensive and less widespread."


Horwitt doesn't consider for a second that the rest of the world will still be posting, even if the U.S. makes it hard for people to have access to computers.

This is the best part,

"It's possible that over time, an energy tax, by making some computers, Web sites, blogs and perhaps cable TV channels too costly to maintain, could reduce the supply of information. If Americans are finally giving up SUVs because of high oil prices, might we not eventually do the same with some information technologies that only seem to fragment our society, not unite it? A reduced supply of information technology might at least gradually cause us to gravitate toward community-centered media such as local newspapers instead of the hyper-individualistic outlets we have now."


Horwitt seems to be an amateur comedian and songwriter. You'd think his attempts at comedy would help him write a better satire if that's what this is supposed to be. Or is it all an elaborate hoax, an enormous troll? He's also an "analyst" for the Environmental Working Group. Wonder how well his analysis is informing the EWG? I like this bit of their site - "How EWG Does It: Our research brings to light unsettling facts that you have a right to know." Hmmm. You have a right to know... What the likes of Dusty Horwitt want you to know. Apparently you don't have a right to speak.

The Washington Post just undercut Horwitt's big point -- that newspapers deserve to survive at all -- by publishing this massive piece of bullshit. This is the legitimacy of mainstream, traditional media?

It'll serve this dude right to be mocked on as many blogs as possible. He thinks viral information doesn't work? Maybe a million pissed off bloggers will let him know otherwise. God, if only I could find his personal Myspace... I'm sure it's comedy gold.



Sunday, August 24, 2008

Call for submissions: WisCon Chronicles, volume 3

Call for Ideas and Contributions
WisCon Chronicles Volume 3 - WisCon 32



Were you at WisCon 32 in 2008? Aqueduct Press would love to hear from you with ideas and materials for Volume 3 of The WisCon Chronicles.


ANY panel, event, or paper you'd like to write about is fine.

Here are a few that we've noticed people talking about:


* Maureen McHugh and L. Timmel Duchamp's Guest of Honor readings and speeches
* Women and Hard SF
* Elves and Dwarves: The Racism Inherent in Fantasy
* Fanfic and Slash 201
* The Battlestar Galactica panel
* The Eclipse One Cover Debate
* Not Just Japan: Asian Science Fiction and Fantasy
* Writing Working-Class Characters

We'd also like to see writeups of your hallway conversations: What fantastic discussions did you have in the interstices? In the hallway, in the lobby? At parties, at dinner, in your room, or online?

If you were at WisCon and would like to participate — to offer ideas or to submit an essay — please get in touch with us. Don’t be shy.

If you were blown away by a WisCon panel that we haven’t mentioned and would like to see its ideas expanded upon in The WisCon Chronicles, Volume 3, please let us know. Tell us the name of the panel, which participants (including audience members) most engaged you, and what was valuable to you about the discussion. What was thought-provoking, inspiring, enraging, hilarious, worthy of deeper discussion? If you’re interested in writing an essay on the topic or contributing to the book in some other way, let us know.

Please query before writing an article. If you want to submit an article or essay, please email a query or proposal by September 15, 2008. (The earlier the better.) The deadline for the submission of finished essays will be October 15, 2008. Text in the body of the email is preferred, or rich text format (.rtf) files as attachments. We’re looking for essays of 800-3000 words. If your submission is published, you will receive a small payment and a copy of the book.

Feel free to forward this call for submissions!


Thanks,
Liz Henry
email ideas and submissions to: liz@bookmaniac.net


Tuesday, August 19, 2008

Learning Python by writing a screen scraper

Just for fun I decided to write a tool for work in Python instead of Perl, and I thought I'd describe the process, partly because other people can be very opaque about how they learn things, especially how they learn technical things or approach something unfamiliar.

At work, I had a big list of URLs to look through, to check a particular detail of sidebar widget code on each blog.

Originally, I got kind of excited as I browsed this page of twill commands. I installed twill on my laptop and tried out the examples. Wooo! That was so easy! It was like screen scraping with pseudocode. So, without really looking into it any further I figured it would be easy to churn through the few thousand URLs I needed to check. It looked like maybe a day of work, or two if I was floundering around, which is often likely when learning something new.

When I actually sat down to do it, I realized that twill didn't have any control flow commands. So there wasn't a way within twill, fabulous as it seemed, to tell it to go through a list of variables. So I started to try to write it in Python. I went to a few of Seth's informal Python lessons months ago and then paired with Danny a little bit. We wrote a thingie with Django to let people create random Culture names. (Result: I am the Human Sol-Terran Elizabeth Badgerina Karen da'Champions-West. I am currently travelling on a Ship, the ROU Knock Knock, Psychopath Class.) In other words, as of Monday, I could write Hello World in Python, but only if I look at the manual first.

So I wrote pseudocode first, like this:


read in biglist.txt

for each line in biglist.txt
    split the line into $blogurl, $blogemail

    get that url's http dump
    if it has "OLDCODE" in it
        write $blogurl, $blogemail to hasoldcode.logfile

    if not
        if it has "NEWCODE" in it
            write $blogurl, $blogemail to hasnewcode.logfile

Pseudocode rocks.

The Python docs made me want to chew my laptop in half. I read them anyway, mostly the string functions, but that barely helped. Instead I just brute-force googled things like "how do i find a substring in a string with python", which was often helpful if only so that I felt less lonely. It took me a while just to figure out how to concatenate two strings. I kept trying to use a dot, which didn't work!
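For anyone googling the same things: the `in` operator does the substring check, and `+` does the concatenation (Perl's dot does not). A tiny sketch, with made-up strings:

```python
# Substring test: the "in" operator, or find() if you want the position.
h = "<html><body>OLDCODE</body></html>"
print('OLDCODE' in h)      # True
print(h.find('OLDCODE'))   # index of the match, or -1 if absent

# Concatenation: "+" joins strings; Perl's "." raises a TypeError here.
blog = "http://example.com"
email = "someone@example.com"
msg = blog + "," + email + "\n"
print(msg)
```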

The most helpful thing was turning up bits of other people's code, the simpler the better, with brief explanations.

The next most helpful thing was cheating by asking Danny, who kindly just said "Oh, do this, import urllib2 and do f=urllib2.urlopen(blog) and then h= ''.join(f.readlines()) and print h." Testing things out in the interpreter was useful too. The point isn't whether you lift it out of a class, a book, someone else's code, or another person. Just start out with a few lines of something that works. Then, fiddle with it and master it.
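Danny's one-liner works on any file-like object, so here's the same idiom on an in-memory buffer (a stand-in for the urlopen result, so it runs with no network), just to show what the join is doing:

```python
import io

# Stand-in for urllib2.urlopen(blog): any file-like object will do.
f = io.StringIO("<html>\n<body>NEWCODE</body>\n</html>\n")

# readlines() gives a list of lines; ''.join glues them into one string.
h = ''.join(f.readlines())
print('NEWCODE' in h)  # True
```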

But there was no way I could figure out from scratch what to do with the first giant horrible alpha-vomit error that popped up. Danny kindly IM-ed me "try: THINGIE except ValueError:" which I then googled and figured out how to use. Some googling of bits of the error message would have gotten me some useful examples. Here, I may have chickened out and asked for help too soon.
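The pattern Danny IM-ed, spelled out on the split that needed it (the lines here are made up):

```python
# Unpacking line.split(",") raises ValueError when a line doesn't
# have exactly one comma; "continue" just skips those lines.
lines = ["http://example.com,me@example.com", "garbage line with no comma"]
good = []
for line in lines:
    try:
        (blog, email) = line.split(",")
    except ValueError:
        continue
    good.append((blog, email))
print(good)  # only the well-formed line survives
```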

I put my pseudocode into a file as comments. I wrote a file of a few test urls and emails. Then I wrote the program kind of half-assedly a couple of times, piece by piece, only trying to do one thing at a time. It mostly worked. That was the end of Day 1.

In the morning life was much better. Then I rewrote my pseudocode to include all the things I'd forgotten. I threw everything out and started over. Suddenly I felt like I knew what I was doing. A couple of hours later it all worked great.

While I was in that state of knowing-what-I-was-doing and being able to see it all clearly, it was exactly like the point in writing a poem where I know the map ahead. It's knowing the map and how to navigate, having not just the destination but a mental image of the entire trip. So, as with going down a road where I've never been before, but have imagined out the map, I feel a sense of the entire poem, all at once. It's a holographic feeling. In that state of mind, I am very happy, and want to work without stopping.

It's funny to talk about such a simple bit of programming that way but that's how it felt. I also knew where I was doing something in an inelegant way, but that it was okay and I'd fix it later.

When it was all working I paused for a bit, then went back and fixed the inelegant bit.

After that, I put in some status messages so I could watch them scroll past with every URL. (A good idea since the error checking is not very thorough.)

Here is the code. I can see more ways to improve it and make it more general. It would be nicer to just use the filename as the scrolling status message, perhaps giving the files names that would look better as they scroll past. There are also stylistic questions like, I know many people would combine

outfile = open("gotsitedown.txt",'a')
outfile.writelines(msg)

into one line, but I wouldn't be able to read that a month from now and understand what it meant, so I tend not to write code that way. Maybe once I'm better at it.


#! /usr/bin/python
import urllib2

# This reads in a list of urls & emails, comma separated.
# It checks each url for a specific phrase in its HTML
# and writes the url and email to a log file.
# The status print lines are for fun, to watch it scroll.

lines = open("biglist.txt", 'r').readlines()
for l in lines:
    line = l.strip()
    try:
        (blog, email) = line.split(",")
    except ValueError:
        continue
    try:
        f = urllib2.urlopen(blog)
        h = ''.join(f.readlines())
        if 'NEW BIT OF CODE' in h:
            filename = "gotnewcode.txt"
            status = "New! "
            if 'OLD BIT OF CODE' in h:
                filename = "gotbothcodes.txt"  # replaces filename!
                status = "Mixed up codes: "
        elif 'OLD BIT OF CODE' in h:
            filename = "gotoldcode.txt"
            status = "Old code: "
        else:
            filename = "gotnocode.txt"
            status = "No code here: "
        msg = blog + "," + email + "\n"
        outfile = open(filename, 'a')
        outfile.writelines(msg)
        print status + msg
    # check for 404 or other not-found errors
    except (urllib2.HTTPError, urllib2.URLError):
        msg = blog + "," + email + "\n"
        outfile = open("gotsitedown.txt", 'a')
        outfile.writelines(msg)
        status = "Site down: "
        print status + msg


My co-worker Julie was sitting across the table from me doing something maddeningly intricate with Drupal and at the end of the day she agreed with me that it is best to code something the wrong way at least twice in order to understand what you're doing. "If I haven't done it wrong three times, something is wrong."

I wish now that I had written down all the wrong ways, or saved the wrong code, to compare how it improved. One wrong way went like this: instead of writing the 5 different logfiles of blogs with new code, old code, no code, site down, and mixed old and new code, I thought of making directories and writing all the url names to the directory. That was before I thought of putting the emails in a csv file with the urls. Why did it make sense at first? Who knows! It might be a good rule, though. The first thing you think of is probably wrong. You can't see a more optimal way until you have walked around in the labyrinth down some possible yet wrong ways.

This WordPress Code Highlighter Plugin might be the inspiration that pushes me finally off of Blogspot and onto WordPress for this blog, so that I can do this super nicely in text and not in an image.


Thursday, August 14, 2008

Excite@Home bankruptcy trainwreck continues in slow motion

I cannot believe that I'm still getting legal notices about the Excite@Home bankruptcy and the pay they still owe me from October 2001. For real? Seven years of an utter waste of time and resources. How many lawyers are frittering away their lives and raking in the dough on this bullshit? They gave me most of my money long ago, from when they bounced all our last paychecks.

It stirs up my ire to get these snail mails, sometimes big fat packets of totally pointless legal documents. Some freaking genius should have made a webpage about a million years ago, for creditors (like me) to keep an email contact updated and they'd be able to pay us all that much more in saved postage costs.

Mostly though, I remember surviving the rounds of layoffs, reading Fucked Company every day along with all my co-workers, and then enduring the incredibly wankery company-wide pizza-and-beer meetings in the "garage" with endless PowerPoint slideshows about how great we were doing, which were obvious lies.

And the way they'd do some weird NASCAR event and whoop it up as if that was going to solve all our problems because it was cool.

The one satisfying thing was when they axed a couple of buildings after one brutal layoff, I took a whole lot of the office stuff and furniture they were throwing away and hauled it in my truck to donate it to the nearest elementary school, where second grade teachers fought like tigers over staplers and pairs of scissors. Where is the justice. They were desperate for petty office supplies while five blocks away a bunch of us basically wasted oxygen reading Fucked Company, downloading shit from Napster, and sighing bitterly as we waited for the axe to fall. *

There were some nice people at Excite (aka WebCrawler) that I worked with, but man, that company was so clearly going down. It was sad.

Anyway, everyone keep in mind if you need to get riled up, that there's a bunch of really rich bankruptcy lawyers sailing their yachts around and enjoying their home movie theaters, still reaping the rich rewards of the dot-com crash.

* Note to future employers, actually I am a super hard worker until everyone around me has been laid off and doom is in the air.


Monday, August 11, 2008

Joy of unit testing

(From about 2 weeks ago, late at night)

I was just vaguely napping and realized I was still thinking in my sleep about the PHP code I had just been writing. Though I barely even know PHP at all, it wasn't that hard to just guess at it because it was mostly like Perl. My thinking in Perl is a bit stuck. Today with Oblomovka I wrote out what I wanted my program to do, then he started writing tests. At first I didn't get it that the tests didn't run actually in the program. My thinking was inside out. I thought I'd run a bit of code, then run something that tested if it did it right, or that error/die statements would be sprinkled around. But as I saw what Oblomovka was doing it was like a light went on and I felt like everything I've written has been incredibly sloppy! Works fine, tells you if it doesn't work, but was like wearing shoes instead of making a road. Or the other way around.

It was really fun to write the very simple tests and then figure out how to send it the simplest possible thing to fake it out so that the tests would pass. So for example if you were writing a simulated ball game, you would not start by simulating a baseball game. Instead, you might vaguely sketch out what happens in a game. Then, you'd write a test that goes like, "Does a ball exist? If not, FAIL." You would watch it fail. It's supposed to. Then think of the smallest thing it needs to do to pass. Your program would then merely need to go, "Oh hai. I'm a baseball" and the test would pass. You'd write another test that goes, "Is there a bat?" and "Is a baseball coming at my bat?" As you write fake bats, balls, and ball-coming-at-you actions, the baseball game starts to take shape. All the tests have to keep passing. The structure of how to build it becomes more clear, in a weird way. This isn't quite the right analogy. I can't quite get into the way of thinking and end up just hacking quickly on ahead. But for a little while, I felt the rightness of this way of doing things.
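The work that day was in PHP, but the same rhythm looks like this in Python's unittest (the Ball class and test names here are invented for illustration):

```python
import unittest

# The smallest thing that makes the first test pass: a ball exists
# and can announce itself.
class Ball(object):
    def __str__(self):
        return "Oh hai. I'm a baseball"

class TestBallGame(unittest.TestCase):
    def test_ball_exists(self):
        # First test: does a ball exist? Watch it fail before Ball
        # is written, then write just enough code to make it pass.
        self.assertIsNotNone(Ball())

    def test_ball_announces_itself(self):
        self.assertIn("baseball", str(Ball()))

# Run the suite explicitly so it works outside a test runner.
suite = unittest.defaultTestLoader.loadTestsFromTestCase(TestBallGame)
result = unittest.TextTestRunner(verbosity=0).run(suite)
print(result.wasSuccessful())  # True
```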



