Webmail Incident Report - Follow-up

Following up on the webmail incident report, we pledged to provide answers to any remaining queries which the original report didn’t address. We have answered as many of those questions as we can here, although, primarily for reasons of ongoing platform security, there are a number of questions that we have not answered directly.

Since May, we have been focused on a 90 day plan which we formulated following the webmail incident. This combines a number of sub-projects, some of which were already in progress before the event (such as ensuring PCI compliance), and others that have come about as a result of the adoption of stricter security standards across our operation. This work has ranged from internal doors being locked down with biometric access, to the rebuilding of servers and other network elements with a view to application standardisation and code consolidation (which will have the added benefit of making future developments simpler and more robust). The project also includes publishing our data retention and privacy policies and ensuring these are implemented fully in all parts of our operation.

In terms of the questions we were asked, we have broken these into the following areas:

- Questions about events leading up to the incident
  - The Vulnerability
  - Stored Data
- Technical Questions about the impact of the incident
- Questions about our initial response to the incident
- Questions about changes we will make in our operation as a result of the incident
- Other Questions

Questions about events leading up to the incident

The Vulnerability

We were asked about the specific vulnerabilities that allowed the hacker to access data held within the webmail system. The vulnerability was not unique to PlusNet’s modification of the @mail code, although our specific implementation made that vulnerability more serious once it had been exploited. At the time the incident occurred we had applied all the security patches known or available for @mail.

The nature of the exploit suggests that the attackers were already familiar with the @mail code and database structure. This, coupled with the fact that we allow anyone to create a free webmail account and to access it from anywhere, made it possible for the attackers to gain access to our webmail platform. Most other implementations of @mail are, in one way or another, more restrictive about who can access them. The attack took place entirely via the website and web server; however, we are unable to publish more detail about the specific method used, except to confirm that this was not an XSS-based attack.

As we said in the incident report, whilst the initial exploit was something that we don’t think was easily preventable, the resulting impact could have been reduced with different technical and procedural measures. We have now implemented these measures, but will not be publishing specific details for security reasons. Whilst no network can be 100% secure, we regularly operate internal security tests and have used external security companies to perform penetration testing and external audits; the last of these before the incident was in January 2007.

Stored Data

In 2004, as part of replacing our old home-grown webmail application, we imported all of our customer email addresses into the new @mail system. The previous webmail software had used customer contact email addresses by default. We wanted to ease the transition for webmail users to @mail, and as a one-off exercise we imported the contact addresses of existing customers into the new system. In hindsight we should have forced customers to set up their details again, but at the time we felt this would cause unnecessary inconvenience. Only customer email addresses and contacts were imported into the webmail system.

For our implementation of @mail, we made a decision to keep the servers entirely separate from the rest of our systems. While on one hand this limited what the attackers were able to access, the separation also meant that changes to our main databases and mail systems were never reflected within the webmail platform itself. The outcome was that the details for accounts which were cancelled, or mailboxes which were deleted, were not removed from the webmail database. It would, however, have been impossible to access webmail without an active account, as authentication is performed against our main database.
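To illustrate the kind of synchronisation that was missing, here is a minimal sketch of a reconciliation job that purges details of cancelled accounts from a stand-alone webmail database. It is purely illustrative: the table and column names (accounts, webmail_users, address_book, stored_mail) are assumptions for the example, not our actual schema, and this is not the tooling we have deployed.

```python
# Hypothetical sketch only: table and column names are assumptions, not
# PlusNet's actual schema. It removes webmail records whose owner no longer
# has an active account in the authoritative customer database.
import sqlite3

def purge_cancelled_accounts(main_db_path: str, webmail_db_path: str) -> int:
    """Delete webmail data belonging to accounts that are no longer active."""
    main = sqlite3.connect(main_db_path)
    webmail = sqlite3.connect(webmail_db_path)
    try:
        # Usernames still active according to the main customer database.
        active = {row[0] for row in main.execute(
            "SELECT username FROM accounts WHERE status = 'active'")}

        # Usernames that still have data in the separate webmail database.
        stale = [row[0] for row in webmail.execute("SELECT username FROM webmail_users")
                 if row[0] not in active]

        # Purge address books, stored mail and the user record itself.
        for username in stale:
            webmail.execute("DELETE FROM address_book WHERE username = ?", (username,))
            webmail.execute("DELETE FROM stored_mail WHERE username = ?", (username,))
            webmail.execute("DELETE FROM webmail_users WHERE username = ?", (username,))
        webmail.commit()
        return len(stale)
    finally:
        main.close()
        webmail.close()
```

Run periodically, or triggered from the account-closure workflow, a job of this kind would have kept the separate webmail database in step with our main systems.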

Technical Questions about the impact of the incident

We’ve been asked for details about precisely what data the attackers were able to access. Although we can’t be precise, we must presume that anything stored within the @mail databases could have been taken. As described in the incident report, the databases contained customer address books, email addresses and, in certain circumstances, email content. Specifically, this was email not stored in the default inbox for customers who logged into webmail using the ‘POP3’ option. Mail accessed using the IMAP option was stored on our main email servers rather than within the webmail database. No other data that could be regarded as customer-specific information was stored on the webmail platform. Between the time when the server was first compromised and the time the issue was fully resolved, it is possible that other information could have been accessed through customer interaction with the affected server.

We have also been asked to clarify how authentication took place on the webmail platform and whether it was possible that password data could have been obtained. No evidence to suggest this has come to light, but for the above reasons it remains a possibility (which is why we advised customers to change their passwords). Authentication of mail collection occurs via our mail platform, and these servers were not affected.
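The storage split described above can be summarised in a small, purely illustrative sketch; the function name and return convention are our own for the example and are not part of @mail.

```python
# Illustrative only: @mail's internals are not public. This simply mirrors
# the storage split described above.
def held_in_webmail_database(access_mode: str, folder: str) -> bool:
    """True if mail in this folder sat in the @mail database (and so was in scope)."""
    if access_mode == "IMAP":
        return False   # IMAP mail stayed on the main email servers
    if folder == "INBOX":
        return False   # the default inbox was not held in the webmail database
    # Other folders belonging to customers using the 'POP3' option were held
    # in the webmail database.
    return access_mode == "POP3"

# Example: a POP3-option user's saved-mail folder was within the webmail database.
print(held_in_webmail_database("POP3", "Saved"))   # True
print(held_in_webmail_database("IMAP", "Saved"))   # False
```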

Questions about our initial response to the incident

The first tickets relating to the incident were raised on the afternoon of Saturday 5th May. They entered the Customer Support ticket pool and were dealt with in normal rotation order (oldest first), meaning we picked them up the following day. Please refer to the original incident report for further details of the timing of events.

When we identified the Trojan redirect on the WM04 server, we checked the other webmail servers for a similar compromise (and found none) and also checked for other malicious activity at that time. We took down and investigated what we believed to be the only compromised server and, not finding anything to suggest a further compromise had taken place, we returned the remaining webmail servers to full service. It’s important to understand that running a virus scan or detecting a compromise on a Unix server is an entirely different thing from the virus checking one might perform on a Windows PC. Although we felt we could make a quick fix and return the webmail service with minimal inconvenience, we did continue to monitor the webmail platform, and as soon as we realised that all was not well we made the decision to permanently remove the @mail platform from service.

During the early days of the incident our priorities were to understand and solve the webmail problem. Once we knew who had been affected, we moved as quickly as we could to inform those customers about the issue. In hindsight this was not fast enough, and in future we would be in a position to react faster. Initially we used the signature detection technology offered by our Ellacoya platform to identify customers who exhibited the signs of having been affected by the Trojan, and whose data transfer profiles matched the Trojan’s signature. We phoned some and emailed all such customers with specific instructions.

Although we communicated with the relevant authorities throughout the incident, we didn’t formally raise the issue with the police until 16th May. By this time our Incident Team had carried out a full and thorough forensic examination. It was only when we had sufficient evidence to reasonably suspect that a criminal act had occurred that we were in a position to report the crime.
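For those asking how signature detection can identify affected customers, the following is a minimal sketch of the general idea, assuming a made-up TransferProfile structure and made-up threshold values; the Ellacoya platform’s real interface and the Trojan’s actual traffic signature are proprietary and were never published.

```python
# Hypothetical sketch only: the signature values and data structure below are
# assumptions for illustration. It flags customers whose upstream transfer
# profile matches a known signature, so they can be contacted with
# remediation instructions.
from dataclasses import dataclass

@dataclass
class TransferProfile:
    username: str
    destination_port: int
    avg_upstream_kbps: float
    connections_per_hour: int

# Assumed signature thresholds purely for illustration.
TROJAN_SIGNATURE = {
    "destination_port": 25,          # unexpected direct-to-MX SMTP traffic
    "min_upstream_kbps": 5.0,
    "min_connections_per_hour": 200,
}

def matches_signature(profile: TransferProfile) -> bool:
    """True when a customer's traffic profile fits the assumed signature."""
    return (profile.destination_port == TROJAN_SIGNATURE["destination_port"]
            and profile.avg_upstream_kbps >= TROJAN_SIGNATURE["min_upstream_kbps"]
            and profile.connections_per_hour >= TROJAN_SIGNATURE["min_connections_per_hour"])

def affected_customers(profiles: list[TransferProfile]) -> list[str]:
    """Return usernames whose traffic matches the signature, for follow-up contact."""
    return [p.username for p in profiles if matches_signature(p)]
```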

Questions about changes we will make in our operation as a result of the incident

We’ve been asked why customer data was stored even after an account was closed, and what we are doing to prevent that. As part of our 90 day follow-up plan we have conducted a full audit to ensure that there are no other areas where we are storing unnecessary data, and that only the information we are required to keep by law (under the Limitations Act and HMRC legislation) is retained anywhere on our systems. Until this point we maintained closed-account data in a ‘destroyed’ state which, although fully compliant with the law, only saw periodic purges of our archives to remove old data. Our data retention policy will be published shortly in the policies section of our website. Ensuring that all of our systems are synchronised and not storing unnecessary information will be at the forefront of our minds during the design of all future projects.

A lot of questions were asked about the new security team, and although we do plan to explain more about its remit in future communications, we will not go into the specifics of our security policies and procedures. The new team will enhance and augment our existing procedures for performing security audits and penetration testing. It will not be responsible for answering customer tickets directly; that responsibility will remain with the CSC. If a ticket raises a potential security concern, it will be escalated to the most appropriate team. Customers can be sure that we will always treat any further suggestions of a security problem very seriously.

Other than that, we are nearing completion of the work on delivering most of the seven commitments we made in the incident report. Of these, all work is on track except encrypted email and FTP access, which are taking longer to deploy than we first expected. We will have a further update on all of these deliverables, along with more details about the steps we are taking to combat spam in general, in an additional update.

Our response has focussed on current PlusNet customers, and while we accept that we have not been able to address the inconvenience caused to those who are no longer customers, we would again like to offer our most sincere apologies to all. On the back of this incident we have become determined to do absolutely everything we can to ensure that the security of our platform can never be questioned again. We have adopted a robust set of security standards and continue to focus almost all of our network and development resources on the 90 day security project, which ends on 20th August.
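As an illustration of how a retention policy of the kind described above can be applied mechanically, here is a small sketch; the six-year period and the record categories are assumptions made for the example, not our published policy.

```python
# Hypothetical sketch only: retention periods and record categories below are
# assumptions for illustration. Records needed for legal or tax purposes are
# kept for a fixed period after account closure; everything else is purged.
from datetime import date, timedelta
from typing import Optional

# Retention periods per record category, measured from account closure.
RETENTION = {
    "billing": timedelta(days=6 * 365),    # assumed statutory retention period
    "contact_details": timedelta(days=0),  # purge once the account is closed
    "webmail_content": timedelta(days=0),
}

def is_due_for_purge(category: str, closed_on: date, today: Optional[date] = None) -> bool:
    """True when a record of this category has passed its retention period."""
    today = today or date.today()
    keep_until = closed_on + RETENTION.get(category, timedelta(days=0))
    return today > keep_until

# Example: contact details for an account closed in July 2006 should already be
# purged, while billing records are still within their retention window.
assert is_due_for_purge("contact_details", date(2006, 7, 1), date(2007, 8, 1))
assert not is_due_for_purge("billing", date(2006, 7, 1), date(2007, 8, 1))
```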

Other Questions

A few of the questions asked were not directly related to the webmail incident. One of these concerned our recent entry onto the data protection register. We are not in fact obliged to register under the terms of the Data Protection Act, although we are obliged to comply with the requirements of the Act. Our registration was already in hand prior to the webmail incident, and was purely voluntary.

The other question was about the length of time it took to implement stronger passwords for customers. This was no small piece of work, and as with all of the changes and improvements made during this period, we had to drop all other development work to make it happen.

We hope that this final posting has helped to allay remaining concerns about what happened and why. Everyone at PlusNet remains very conscious of the inconvenience and concern that arose as a result of these events. We are now focusing on reducing the amount of spam email that our customers are receiving, and more information about these initiatives is being published regularly on our community site.
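For readers curious what the stronger-passwords work typically involves, the sketch below shows an illustrative minimum-strength check of the kind applied when a password is set or changed; the specific rules are assumptions for the example, not the rules we have deployed.

```python
# Hypothetical sketch only: the rules below are illustrative, not PlusNet's
# actual password policy. A check like this runs when a password is set or changed.
import re

def is_strong_password(password: str, username: str) -> bool:
    """Apply illustrative minimum-strength rules to a candidate password."""
    if len(password) < 8:
        return False
    if username.lower() in password.lower():
        return False
    # Require a mix of character classes.
    required_classes = [r"[a-z]", r"[A-Z]", r"[0-9]"]
    return all(re.search(pattern, password) for pattern in required_classes)

# Example usage:
print(is_strong_password("Summer2007", "jsmith"))   # True
print(is_strong_password("password", "jsmith"))     # False
```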
