Can we really keep out worms?
An interesting piece from Information Security Magazine takes a look at a range of “antiworm” products which promise to contain worms by weeding out bad traffic. Among them: Mirage Networks, ForeScout, Check Point Software Technologies, Silicon Defense and IBM.
Their approaches differ, the article says: some look for unfulfilled Address Resolution Protocol (ARP) requests, some use anomaly detection, and others automatically isolate compromised hosts. Some redirect worm traffic to a quarantined area, buying time to isolate the worm while keeping systems available. Still others try to limit a virus's spread by 'throttling' it, i.e. capping the number of network connections an infected computer can open.
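To make the throttling idea concrete, here is a minimal sketch of the approach: connections to hosts seen recently pass immediately, while connections to new hosts are queued and released slowly, since a worm contacts many new hosts while normal traffic revisits a small working set. The class name, rate, and queue threshold are all invented for illustration, not taken from any of the products mentioned.

```python
from collections import deque

class ConnectionThrottle:
    """Illustrative connection throttle: pass traffic to recently seen
    hosts, trickle out connections to new hosts, and flag a machine
    whose backlog of new-host connections grows too large."""

    def __init__(self, working_set_size=5, max_backlog=100):
        self.working_set = deque(maxlen=working_set_size)  # recently seen hosts
        self.delay_queue = deque()   # pending connections to new hosts
        self.max_backlog = max_backlog

    def request(self, dest_ip):
        if dest_ip in self.working_set:
            return "allow"                  # known host: pass immediately
        self.delay_queue.append(dest_ip)    # new host: queue it
        if len(self.delay_queue) > self.max_backlog:
            return "quarantine"             # backlog growth signals a worm
        return "delayed"

    def tick(self):
        """Called on a timer (e.g. once per second): release one
        queued connection and admit that host to the working set."""
        if self.delay_queue:
            host = self.delay_queue.popleft()
            self.working_set.append(host)
            return host
        return None
```

A legitimate machine barely notices the delay, because almost all of its traffic hits the working set; an infected one fills the queue within seconds.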
Interesting article, but in the end we don't know exactly what the next worm will do, so aren't we back at square one, always being wise after the event, like all anti-virus software? Or am I missing something?
The missing bit here is that ISPs really need to start doing active monitoring of this stuff. When someone zombies a PC, that PC is going to wake up periodically and do unreasonable things. The most telling unreasonable thing is that it's going to connect to a lot of different mail servers, or to a lot of different bots. The zombie might also try to cram a lot of email through the ISP's mail server. That's something that should be easy to scan for. Each ISP needs to take _some_ responsibility for what the computers on its subnet do. If a PC tries to send 70,000 emails through, it should be blocked automatically, with an email sent to the user advising them to call the ISP and verify that this is what they meant to do. The very few people who run mailing lists can easily get an accommodation from their ISP, if indeed they are allowed to mass mail at all.
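The relay-side check described above could be as simple as a per-subscriber counter with an allowlist for vetted mailing-list senders. The limit, the class name, and the action labels below are all hypothetical, chosen only to illustrate the mechanism.

```python
from collections import defaultdict

class OutboundMailMonitor:
    """Illustrative ISP-side outbound-mail monitor: count messages
    relayed per subscriber and block past a daily limit, exempting
    vetted mailing-list senders."""

    def __init__(self, daily_limit=500, allowlist=None):
        self.daily_limit = daily_limit
        self.allowlist = set(allowlist or [])   # subscribers with an accommodation
        self.counts = defaultdict(int)          # messages relayed today, per subscriber

    def on_message(self, subscriber_id):
        if subscriber_id in self.allowlist:
            return "relay"                      # vetted mass-mailer: no limit
        self.counts[subscriber_id] += 1
        if self.counts[subscriber_id] > self.daily_limit:
            # Stop relaying and tell the user to call the ISP to verify.
            return "block_and_notify"
        return "relay"
```

A real deployment would reset counters daily and rate-limit the notification emails themselves, but the core check is just this comparison.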
Throttling in general is _absolutely_ the way to go for nearly every part of this problem. For connections inbound to a mail server, throttle sources according to trust level. Use a Bayesian filter to get a rough gauge of trust; since you're going to pass everything through anyway, it doesn't have to be terribly accurate. Low trust means low throughput and tight limits on transactions per connection.
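A rough sketch of that trust-to-throughput mapping, assuming the Bayesian filter already yields a per-message "ham probability": smooth it into a running score per source, then pick limits by tier. The tier boundaries, limits, and names are invented for illustration.

```python
def limits_for(trust):
    """Map a trust score in [0, 1] to illustrative inbound limits.
    Higher trust gets more messages per connection and no delay."""
    if trust >= 0.8:
        return {"max_msgs_per_conn": 100, "delay_secs": 0}
    if trust >= 0.5:
        return {"max_msgs_per_conn": 10, "delay_secs": 1}
    return {"max_msgs_per_conn": 1, "delay_secs": 10}   # low trust: trickle

class SourceTrust:
    """Running trust estimate for one source IP, fed by the
    Bayesian filter's per-message ham probability (1 - spam score)."""

    def __init__(self, prior=0.5, weight=0.9):
        self.score = prior      # start neutral
        self.weight = weight    # exponential smoothing: older history decays

    def update(self, ham_probability):
        self.score = self.weight * self.score + (1 - self.weight) * ham_probability
        return self.score
```

Because everything still gets through eventually, a misclassified legitimate sender is merely slowed, not lost, which is why the filter's accuracy matters less here than in outright blocking.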