Intel has had a rough few months following the disclosure of the Spectre and Meltdown security flaws. On January 25, the company announced plans seemingly meant to put people at ease about its processors being more secure in the future. It has shifted blame for the issues onto hackers – which is valid to some degree, but hackers were only able to gain access to devices thanks to security flaws that arguably shouldn't have existed in the first place. From a corporate point of view, it makes sense to deflect blame away from the company, but owning a mistake is never a bad thing, and Intel risks coming off as a bit arrogant by deflecting it entirely. In particular, stating that OEMs simply can't test for every scenario in every environment comes off quite poorly.

While that is true, building processors and other computing components with future threats in mind remains possible. AMD hasn't had nearly as many issues with its processors, and it's an OEM in the same sense that Intel is. Intel also stated that updating drivers and firmware is more important than ever, which is definitely true. There are some things PC users simply need to get used to doing that they may never have had to do before. Regularly updating security software and device drivers is easier than ever, but it's also a step that many older consumers may not feel comfortable with, while younger users may skip it because they think they know enough to avoid running into problems.

Meltdown and Spectre

Consumers have come to expect that when they buy something, it will work perfectly for the lifetime of the product without any upkeep or maintenance on their part. While this isn't realistic for any major piece of electronics now, it's still the expectation customers buy products with. A smartphone or tablet should work just as well on day 700 as it did on day one, and a PC should be as fast after two years as it was out of the box without the owner doing anything. PC users have grown accustomed to OS updates requiring restarts, and it isn't something that generally annoys people too much. You set the update to run before you go to bed, sleep, and when you wake up you've got a freshly updated OS and a few screens of information to look at with bleary eyes.

When it comes to drivers, many users either rely on software that tells them when drivers are outdated or do nothing at all. Catch-all updater software isn't the worst idea, but it usually misses at least a few drivers and is far from perfect. Manually updating every single driver is a pain, but it's worth it for anything involving security. The internet isn't just a research tool anymore – it's something everyone who can use it relies on for communication, banking, and purchases. Over the last 20 years, the internet has become woven into our everyday lives; it lets work get done faster than ever, and more complex tasks can be completed more easily thanks to having so many sources of information at our fingertips.
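For those who do want to check things by hand, the underlying idea is simple even if the tooling varies by vendor: compare what's installed against the newest known release. Here's a minimal Python sketch of that comparison – the driver names and version numbers are hypothetical placeholders, not real Intel release data.

```python
# Minimal sketch: compare an installed-driver inventory against known latest
# versions. All names and version strings below are hypothetical examples.

def parse_version(version: str) -> tuple:
    """Turn a dotted version string like '25.20.100.6472' into a comparable tuple."""
    return tuple(int(part) for part in version.split("."))

installed = {
    "example_graphics_driver": "25.20.100.6444",
    "example_chipset_driver": "10.1.17.1",
}

latest_known = {
    "example_graphics_driver": "25.20.100.6472",
    "example_chipset_driver": "10.1.17.1",
}

for name, current in installed.items():
    newest = latest_known.get(name)
    if newest and parse_version(current) < parse_version(newest):
        print(f"{name}: {current} is outdated, latest is {newest}")
    else:
        print(f"{name}: up to date")
```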

The company definitely seems dedicated to making sure it doesn't suffer another TPM vulnerability issue, but the fact that its plan centers on accelerated security updates is worrisome. Intel at least acknowledged that the issue needed a quick response, accelerated its patch, and then learned that the patch could cause problems for certain applications. Even with its newly announced critical response process, a plan for the future that still revolves around speeding things up gives reason for pause.

The multi-step plan starts with risk analysis, continues with velocity, and then moves on to dependency mapping. On a practical level, this means Intel will analyze known vulnerabilities and look at the best way to mitigate them in order to get updates out. The velocity approach involves assessing the overall health of a security update before releasing it. One potential issue is that the company defines different levels of velocity based on the perceived severity of the issue. A low-level issue could get a low-priority update quickly, and if that update isn't enough, it could lead to a larger-scale problem. Intel does at least plan to run thorough testing regardless of severity, but time will tell how solid this approach winds up being.
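To make that concern concrete, here is a toy Python model of severity-based velocity. The tier names and timelines are invented for illustration and are not Intel's actual process; the point is simply that an underestimated severity lands an issue on the slowest track.

```python
# Illustrative sketch only: a toy model of severity-based "velocity".
# The tiers and day counts are made up to show why a misjudged severity
# can delay a fix that later turns out to be critical.

from dataclasses import dataclass

@dataclass
class Vulnerability:
    name: str
    severity: str  # "low", "medium", or "high" as assessed at triage time

# Hypothetical mapping of assessed severity to target turnaround in days.
VELOCITY_TIERS = {
    "high": 7,     # expedited validation, fastest release
    "medium": 30,
    "low": 90,     # slowest track; risky if the severity was underestimated
}

def target_release_window(vuln: Vulnerability) -> int:
    """Return the target number of days to ship a patch for this issue."""
    return VELOCITY_TIERS.get(vuln.severity, 90)

issue = Vulnerability(name="speculative-side-channel-variant", severity="low")
print(f"{issue.name}: patch targeted within {target_release_window(issue)} days")
# If the issue is later reassessed as high severity, the 90-day track has
# already cost time -- the scenario described above.
```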

The final step is to map drivers and firmware, then sequence the updates before they go live. Intel also plans to bundle patches together to make sure everything is updated in the correct order. This part honestly seems fine; outside of something like a patch getting stuck mid-update, it doesn't raise any red flags in theory. The fact that Intel frames this approach around saving itself from productivity losses does seem odd, though. It makes the company look more concerned with keeping its own costs down in the short term – and while a 3% cut to a labor budget might make an individual quarter look better, it seems short-sighted if it leads to more of the kind of issues the company has faced over the last few months.
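For a sense of what dependency mapping and sequencing could look like in practice, here is a rough Python sketch that models firmware and driver packages as a dependency graph and orders them with a topological sort. The component names are made up for illustration; this is not Intel's actual tooling.

```python
# Rough sketch of "dependency mapping" and sequencing: model firmware and
# driver packages as a dependency graph, then release them in an order that
# respects those dependencies. Component names are hypothetical.

from graphlib import TopologicalSorter  # standard library, Python 3.9+

# Each key depends on the components in its set being updated first.
dependencies = {
    "microcode_update": set(),
    "chipset_firmware": {"microcode_update"},
    "graphics_driver": {"chipset_firmware"},
    "management_engine_firmware": {"microcode_update"},
}

update_order = list(TopologicalSorter(dependencies).static_order())
print("Bundle installs in this order:", update_order)
# e.g. microcode first, then the firmware that depends on it, then drivers --
# the "correct order" the bundling approach is meant to guarantee.
```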

Hopefully, Intel has truly learned a lesson and the announced plan is just an easy-to-digest summary of what it actually intends to do. If so, and the company plans on doing far more than this for its processors going forward, then things should be fine. As it stands, the plan seems more reactive than proactive, and that doesn't bode well for the company or the users of its products. Intel finds itself in a tough spot in 2018: 2017 was a rough year due to security issues and product flaws, and while the partnership with AMD for a handful of chips is promising, it won't be enough to remedy all of the damage the brand has sustained. If Intel makes better moves this year than last, it will be fine for the foreseeable future – but it can't become complacent. If it does, it will suffer more problems year after year.