With a few exceptions, the majority of books I have read about pen testing treat reporting as an afterthought, with only a few pages roughly detailing what is expected in the report. What I felt was missing was the actual approach to the reporting process and a solid breakdown of how to structure your findings, so in this lengthy blog post I will attempt to break down the process and provide you with some hints and tips.
That said, a quick disclaimer is required: I don't claim to be an expert in report writing. Everything that follows is what I have learnt over the past year in my role as a tester.
Performing the technical aspect of the assessment is only one half of the overall process. Our final goal is to take our findings and produce a well-written report that is both easy to understand and highlights all the risks found. It also needs to appeal to two different audiences: executive staff and technical staff.
When I first entered the industry I started off doing independent consultancy work to gain some experience. At the time I had very little report writing knowledge and, as a result, I was guilty of falling into the trap of writing the bare minimum when it came to the reporting stage. My methodology was to run an initial vulnerability scan and then manually check the results with a variety of tools. Soon I found myself with a heap of information that I needed to convey, yet I didn't have a clue what to do with it. I simply didn't know how to structure my report and present my findings.
To start with, I copied and pasted a lot from the vulnerability scanner I had run at the start of each project. Whilst the descriptions and explanations provided by the vendor were well written, it made me feel dirty using them in my reports. I wanted to be able to write reports that were clear, concise and well structured.
When I joined a pen testing company, the first thing I was tasked with was a Windows Server build review, reporting my findings in the same style that a paying customer receives. Soon I found myself in familiar territory: a heap of vulnerabilities and the question of how to get them into some sort of logical format. Luckily the company had a software solution which included a template, however it soon became apparent that it was rather clunky. So, instead of becoming reliant on software to do the job, let's look more closely at how a report is broken down.
Let's begin with a view of the overall report structure, then break down each section of the report so that you understand not only the context of each section but also the content that is expected.
Overall Report Structure
- Introduction
  - Scope & Duration
  - Scenario
  - Targets
  - Credits / Assistance
- Executive Summary
- Detailed Findings
  - Vulnerability Title
  - Background relating to the Vulnerability
  - Evidence
  - Risk Analysis / Risk Rating
  - Recommendations to fix
  - References
  - Affected Items
- Appendices
  - Nmap results
  - Whois output
  - SSL/TLS results
Introduction
The introduction usually opens with a little history about the client, the services they provide and their reasons for commissioning an assessment.
Here's an example:
Global Widgets Inc (GWI) is a UK based manufacturer which specialises in the creation of car components. Recognising the value and nature of information stored and processed within their web application, GWI have identified the need to perform a security and vulnerability assessment.
The purpose of this assessment is to test GWI's externally facing infrastructure and eCommerce application in an attempt to identify potential vulnerabilities that could threaten confidentiality, availability and integrity of the customer information being stored.
Testing will provide GWI with an assurance that their current security controls and systems have been correctly implemented and are adequate.
Essentially, you are setting the scene, so this section will typically consist of two or three short paragraphs. Keep it brief - half a page is more than enough.
Scope and Duration
This is where you detail the phases of work that will take place and the duration of the engagement. It is often a good idea to break the project down into phases, especially on larger engagements; it also gives you options for how you present your findings. You should also state the start and end dates of the assessment.
Phase 1 - Infrastructure Testing
Phase 2 - Web Application Testing
Phase 3 - Web Application Code Analysis
The project commenced on the 4th March 2019 and concluded on the 14th of March 2019.
Scenario
The scenario section is optional; if you choose to use it, you simply expand on your previously written introduction. So what do we mean by "scenario"? Simply the type of test being conducted: Black Box, Grey Box or White Box. I find that a short explanation of the client's preferred assessment approach is also a good idea.
Let's say we have been given full access to the source code of the application. We would class this as a "white box test" and our scenario text would read as follows:
Global Widgets Inc have provided full access to the application source code. This high level of access improves the probability that both internal and outward-facing vulnerabilities will be identified and remediated.
Targets
Your target will vary from assessment to assessment. You could be tasked with an infrastructure test, a web application test or a combination of both, so you simply need to list the URLs and/or IP addresses you have been given approval to target.
Thank You's / Credits
In one of my early tests, I was given the task of testing the client's infrastructure as well as the security posture of some services hosted with one of the major cloud providers. All of this was handled by a third party, and the client had kindly spoken with their supplier so that I could approach them when questions arose. I found this extremely helpful, so I added a small note to the report thanking both the client and the third party for their time and assistance. Whilst it's certainly not necessary, it adds a nice touch to the report.
You may be wondering where to find all the detail and content for completing the introduction and the subsections mentioned above. To ensure the successful delivery of a pen test, you also need a responsible account manager who makes sure all parties have the relevant information so the test goes as seamlessly as possible.
The account manager will have already spoken with the client and agreed the items that are in scope for the project, and based upon this information will have set a duration for the test: typically a number of days for testing and a day or so for reporting, depending on the size of the engagement.
A reliable account manager will ensure that you have all this information prior to the test starting.
Executive Summary
The executive summary sums up the overall findings of the assessment. Essentially, we are providing a high-level view of the vulnerabilities discovered. It is important that we plainly state what each vulnerability is and its impact on the business, so the language we use should be less technical. Senior managers usually don't read the whole report; their main concern is finding out what's wrong and how to fix it. The report is then usually handed off to the techies to sort it all out.
Start this section by stating what you did, then note the number of issues discovered, before describing some of the key findings, for example:
XYZ Pentesting Company performed an external infrastructure and web application test against the in-scope IP and application addresses. During the assessment, one high, one medium and two informational issues were discovered. The key findings are noted below.
At this point, you will want to note the vulnerability titles and summarise the details. Remember, these are the key findings, so don't list everything you found; typically you would list only a few issues based on their criticality. That said, if the test produced only a handful of issues, as in the example above, you could note them all down. Use your own judgement, and remember it's a summary for a non-technical reader, so refrain from using technical jargon.
In a traditional pen test you're going to be time limited. Unlike a real-life attacker, you don't have an unlimited amount of time to spend on the engagement, so it's helpful to include a small paragraph stating this. It's also worth noting down any issues that affected the test: it is not uncommon for the client to forget to send credentials for the application being tested, or to fail to get full authorisation from a third party such as Azure or AWS (as of 1st March 2019, AWS no longer requires you to apply for permission prior to testing). If something delays the testing, note it down here; a simple paragraph explaining the issue is enough. Keep the tone neutral and non-confrontational.
Detailed Findings
This will be the main body of your report and will contain all of your findings from the assessment. At this point you have two options: present all the vulnerabilities and issues in order of criticality (highest to lowest), or separate the findings into the relevant phases and then order each phase by criticality. Your company may have a preference on how you present and style the report, in which case you will need to follow that instead.
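The criticality ordering is simple to automate if you keep your notes in a structured form. Here is a minimal sketch; the severity names and example findings are my own illustrations, not from any standard:

```python
# Sketch: order findings by criticality before writing them up.
# Severity names and the sample findings below are illustrative only.
SEVERITY_RANK = {"critical": 0, "high": 1, "medium": 2, "low": 3, "informational": 4}

findings = [
    {"title": "Default web content present", "severity": "informational"},
    {"title": "Unsupported Windows release", "severity": "high"},
    {"title": "SSLv3 enabled", "severity": "medium"},
]

# Highest criticality first; ties broken alphabetically by title.
ordered = sorted(findings, key=lambda f: (SEVERITY_RANK[f["severity"]], f["title"]))

for f in ordered:
    print(f"[{f['severity'].upper()}] {f['title']}")
```

If you separate findings into phases, the same sort can simply be applied within each phase.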
Background
The background lays the foundation of the vulnerability or issue. Think of it as an overview: you're setting the scene before going into further detail. The content of the background section will depend on whether you're creating the report manually, using a template or using some other reporting solution. You won't want to write out the same background over and over again as you test, so my suggestion is to write out your background descriptions and build your own knowledge base that you can refer to at any time. Not only is this a great exercise for your writing, it also builds your knowledge and understanding of vulnerabilities, risk ratings and mitigations.
Here is a quick example:
----- Unsupported Microsoft Windows Release -----
Every Microsoft Windows product has a lifecycle. The lifecycle begins when a product is released and ends when it's no longer supported. Microsoft publishes the key dates of its products' lifecycles so their clients can plan their upgrades.
End of support refers to the date when Microsoft no longer provides automatic fixes, updates, or online technical assistance. Unsupported systems will no longer receive security updates that can help protect the host from harmful viruses, spyware, and other malicious software that can compromise the system’s confidentiality, integrity and availability.
Evidence
Ok, so we have presented our issue description; next we need to prove it, presenting our evidence and how we confirmed our findings. Following on from above, here is our example in more detail.
The server was identified as running Microsoft Windows Server 2008. The version was determined through network fingerprinting and confirmed using an industry vulnerability and auditing tool.
This Operating System is no longer supported by Microsoft. Service Pack support for Windows Server 2008 ceased in 2013. Furthermore, Windows Server 2008 is scheduled to reach the end of its lifecycle on the 14th of January 2020. Security updates will continue until this date, but no further support will be available beyond it. As a result, the Operating System may become vulnerable to any newly discovered exploitation code or techniques.
Unsupported software is also considered as a compliance failure and therefore should be updated. Furthermore, using an unsupported version of the Windows OS will lead to increased operational costs as additional time and investment will be required to keep the server secure.
It won't always be just a couple of paragraphs to explain the issue. Depending on the complexity of the vulnerability, you may have to break the issue down into reproducible steps. Not only will this help the customer identify where the issue resides, it will also help your colleagues when it comes time to retest. Having done a few retests myself, I have found this information invaluable.
Finally, you will also want to screenshot the issue to provide further proof or copy and paste the output of whichever tool that you used to confirm your findings.
Risk Analysis / Risk Rating
Initially I found quantifying risk difficult, but with practice you soon develop a feel for what is considered informational, low, medium, high or critical. In our example above, we would rate this issue as "high". We should also include a risk breakdown using a Common Vulnerability Scoring System (CVSS) calculator, either CVSS v2, CVSS v3 or both.
CVSS V2 AV:N/AC:L/Au:N/C:C/I:C/A:P CVSS Score: 9.7
CVSS V3 AV:N/AC:L/PR:N/UI:N/S:U/C:H/I:H/A:L CVSS Score: 9.4
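If you are curious where these numbers come from, the v3 base score for the vector above can be reproduced from the published formula. This is only a sketch covering the scope-unchanged case and the specific metrics in our example vector, not a general-purpose calculator:

```python
import math

# Sketch of the CVSS v3 base-score formula for a scope-unchanged vector.
# The weights below are the published v3 values for the metrics used in
# the example vector AV:N/AC:L/PR:N/UI:N/S:U/C:H/I:H/A:L.
AV_N, AC_L, PR_N, UI_N = 0.85, 0.77, 0.85, 0.85
C_H, I_H, A_L = 0.56, 0.56, 0.22

iss = 1 - (1 - C_H) * (1 - I_H) * (1 - A_L)    # impact sub-score
impact = 6.42 * iss                             # scope unchanged
exploitability = 8.22 * AV_N * AC_L * PR_N * UI_N

# The specification rounds up to one decimal place.
base_score = math.ceil(min(impact + exploitability, 10) * 10) / 10
print(base_score)  # 9.4
```

In practice you would use an online CVSS calculator, but knowing the mechanics helps when you need to justify a score to a client.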
That said, there may be times when the CVSS score doesn't really match the risk rating. For example, finding default web content is generally considered an "informational" risk; however, if that default content reveals the version number of a running service, and the service is vulnerable to a remote attack, then the risk rating increases from "informational" to "medium".
Recommendations
In this section, we give a recommendation on how the client can fix the issue. For our example we have:
The process of planning and developing a server migration project to a newer Microsoft product or alternative vendor should be considered. Based upon the current lifecycle development of the Microsoft suite of server products, a comparable option to investigate would be Windows Server 2016 Datacenter, currently supported by Microsoft until January 2027.
References
Providing references, where appropriate, can help the client fix an issue quickly. If a vendor's website gives direct guidance on fixing the issue, that would be my first choice. You should also use trustworthy references: the issues you are detailing aren't new and will have been documented by other researchers and security vendors, so you should be able to find something more meaningful than casual fixes posted on forums, Wikipedia or Stack Overflow; I shy away from such references. The OWASP website contains plenty of references, which I encourage you to investigate.
Affected Items
Quite simply, tell the client where you found the issue:
The affected system is: 10.x.x.x
Simply list the affected systems; no need for any further explanation. If you are reporting a web application vulnerability, use the URL of the affected item.
Reporting Tips Breakdown
What was recommended to me, and what I now advocate, is that you should record your results as you test. These don't have to be full paragraphs; just something that makes sense to you and that you can expand on later. By reporting in this manner, I have found I feel less pressure on the days scheduled for reporting. The other advantage is that it leaves you more time to test other areas of the infrastructure or application.
For example, I like to note the IP address, the port, the issue found and the tool used to confirm the finding:
Host: 10.x.x.x Port 443 SSL v3 Present - SSLScan
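If you prefer your running notes in a structured form, a few lines of scripting are enough. The file name and column layout below are my own ad-hoc choices, not any standard:

```python
import csv
from datetime import date

# Append one shorthand finding note per row; expand each into a full
# write-up later, on the days scheduled for reporting.
def log_finding(path, host, port, issue, tool):
    with open(path, "a", newline="") as f:
        csv.writer(f).writerow([date.today().isoformat(), host, port, issue, tool])

log_finding("findings.csv", "10.x.x.x", 443, "SSLv3 present", "SSLScan")
```

A plain text file or notebook works just as well; the point is to capture the host, port, issue and tool while they are fresh.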
Appendices
This section contains the raw output from the tools you used throughout the assessment: Nmap results, SSLScan results, WHOIS information and so on.
Reporting is often seen as a horrible by-product of pen testing, yet it is the key deliverable that your client is paying for, so why not go all out and try to create something to be proud of? A well-written and well-structured report can be a thing of beauty.
I hope this article has been helpful, any comments or suggestions feel free to hit me up on Twitter - @ignitionlab.