
Building Your Team: The Ins and Outs of Higher Ed IT Security

If an institution is truly committed to system security, leaders must focus on hiring teams that understand the ins and outs of security.

One would think that a university environment would be on the cutting edge of everything. It turns out that things change very slowly, if at all. As I usually say, everything moves at glacial speed.

Security, however, is something that cannot change slowly. If you are running the same exact security program that you were running last year, I would be willing to place good money on your enterprise being hacked in one way or another, whether or not you’re aware of it.

I have been building the security program at Columbia University for over fifteen years, and I can say that during that time many things have changed, and continue to change at an ever-increasing rate. One of the things I see in education is that institutions often “cheap out” on sending their staff to conferences. I am of the opinion that, in order to be effective, a security professional needs to be a continuous consumer of information. I go to conferences not with the hope of coming back with all of these new ideas, but with the hope that I will hear little or nothing that is new to me. I figure that unless I can stay at the top of my game, the bad guys will win every time. A sad but true fact in security is that they (the bad guys) only have to be right once, while we have to be right all of the time.

One often hears that security should be done in layers, and then the speaker proceeds to explain all manner of magic-bullet technologies, including firewalls, intrusion prevention systems (IPSs), intrusion detection systems (IDSs), endpoint antivirus (AV) and security information and event management systems (SIEMs), all of which will supposedly solve your security problems and which various vendors will be really happy to sell you.

I can save you a lot of time, aggravation and money by telling you that none of these devices will solve all of your problems (not that I expect you to believe me). My best advice is to invest in the security process, not the technology. After all, depending on technology alone to solve your problems is a long and very bumpy road to disappointment.

One of my favorite investments in security is people, and not just a “security technician” who sits in front of a terminal all day and waits for the system to point out a problem. I like to hire security programmers who understand security and can build custom applications fit to our environment. For example, we do not run an out-of-the-box AV/anti-spam setup on our incoming mail; we run very highly customized versions of MIMEDefang and SpamAssassin. Part of the job of our postmaster is to ensure that the running code keeps pace with the bad stuff coming in. We have hundreds of custom anti-phishing rules that remove most incoming phishing messages. We also run what I fondly refer to as “sledgehammer AV”: any incoming mail that contains an executable of any type (including inside a zip file) is blocked. This saved us from a nasty zero-day last week that hit an affiliate running their own mail system. I believe that mail is one of an organization’s biggest vulnerabilities; it is the biggest source of social engineering. Without the ability to make filter corrections on the fly, you are placing all of your systems at risk.
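To make the “sledgehammer AV” idea concrete, here is a minimal Python sketch of the same rule: block any message carrying an executable attachment, even one tucked inside a zip file. This is not our production filter (that lives inside MIMEDefang, in Perl); the extension list and the parsing approach here are illustrative assumptions only.

```python
import io
import zipfile
from email import policy
from email.parser import BytesParser

# Illustrative extension list; a real filter would be far more
# aggressive and would inspect file contents, not just names.
EXECUTABLE_EXTENSIONS = (".exe", ".dll", ".scr", ".bat", ".cmd",
                         ".com", ".js", ".vbs", ".msi", ".ps1")

def looks_executable(filename: str) -> bool:
    """True if the attachment name has an executable-style extension."""
    return filename.lower().endswith(EXECUTABLE_EXTENSIONS)

def zip_hides_executable(payload: bytes) -> bool:
    """Open a zip attachment in memory and check its member names."""
    try:
        with zipfile.ZipFile(io.BytesIO(payload)) as archive:
            return any(looks_executable(name) for name in archive.namelist())
    except zipfile.BadZipFile:
        return True  # unreadable archives get the sledgehammer too

def should_block(raw_message: bytes) -> bool:
    """Sledgehammer rule: block mail containing any executable
    attachment, including one hidden inside a zip file."""
    msg = BytesParser(policy=policy.default).parsebytes(raw_message)
    for part in msg.walk():
        filename = part.get_filename() or ""
        if looks_executable(filename):
            return True
        if filename.lower().endswith(".zip"):
            payload = part.get_payload(decode=True) or b""
            if zip_hides_executable(payload):
                return True
    return False
```

The point of the sketch is the staffing argument, not the code itself: a programmer on staff can adjust a rule like this the same day a new campaign appears, rather than waiting on a vendor.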

One of the assumptions we make is that any system at Columbia can be compromised. Our philosophy is that all computers must protect themselves from any other machine on the network. In other words, we do not have a secure network environment. We tell everyone that when they plug into our network, they are plugging into the cesspool of the Net. One of the mistakes I see is implying that your network is secure: doing so gives the impression that users on the network can ignore basic system hygiene, like installing updates and patches or running a host-based firewall or AV system. I think that by forcing your users to be part of the security solution, you avoid the problem of mass infections.

Another piece of social engineering I’ve introduced at Columbia is the idea that users are responsible for their own systems. In practice, this means that when a machine becomes infected, it is up to the owner to fix it. The reasoning behind this comes down to kids. When my kids were growing up, I noticed that unless there was a little bit of pain involved, very little was learned from a bad experience. I applied this to our kids at Columbia: when a computer becomes infected, we remove the machine from the network and the machine owner gets the pleasure of rebuilding it. I find that after the second or third time, the owner becomes very adept at security. The software that detects the compromised systems and removes them from the network was written by one of those programmers I hired!
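For a sense of what that detect-and-remove software might look like, here is a hypothetical Python sketch (not the actual Columbia tool): it follows an IDS alert feed and calls a quarantine hook. The log path, the alert format and the quarantine_host stub are all assumptions for illustration; a real version might shut down the offending switch port or move the host to a quarantine VLAN.

```python
import re
import time

# Hypothetical alert feed: one line per IDS alert, e.g.
#   "2024-05-01T12:00:00 COMPROMISED host=10.1.2.3 sig=botnet-c2"
ALERT_LOG = "/var/log/ids/alerts.log"
ALERT_RE = re.compile(r"COMPROMISED host=(\d+\.\d+\.\d+\.\d+)")

def quarantine_host(ip: str) -> None:
    """Placeholder quarantine hook. A real implementation might shut
    the host's switch port or move it to a quarantine VLAN, so the
    owner has to rebuild the machine before rejoining the network."""
    print(f"quarantining {ip}: removed from network pending rebuild")

def follow(path: str):
    """Yield new lines appended to the alert log (like `tail -f`)."""
    with open(path) as f:
        f.seek(0, 2)  # start at the current end of the file
        while True:
            line = f.readline()
            if not line:
                time.sleep(1)
                continue
            yield line

def main() -> None:
    seen = set()  # avoid re-quarantining the same host
    for line in follow(ALERT_LOG):
        match = ALERT_RE.search(line)
        if match and match.group(1) not in seen:
            seen.add(match.group(1))
            quarantine_host(match.group(1))

if __name__ == "__main__":
    main()
```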

I’m of the opinion that in order to have an effective security program, you need to assume that the program you are running today will not be the program you will be running six months from now. This does not mean that you need to scrap everything and start over, but it never hurts to look at what you’re doing and ask yourself, “How is that working out for you?” Never be afraid of the answer, and if you hear back “Not so good,” be excited about the opportunity to revamp, repair or replace a system that was solving last year’s problems.
