Burp Suite contains the following key components: an intercepting proxy, which lets you inspect and modify traffic between your browser and the target application; an application-aware spider, for crawling content and functionality; and an advanced web application scanner, for automating the detection of numerous kinds of vulnerability. After discussing Burp Suite setup and the Proxy and Target tools in the last blog post, this post discusses the Spider, Repeater and Intruder tools. Spider is used to map out a site more thoroughly, Repeater is used for manually tampering with and replaying requests, and Intruder is used to automate a large number of requests with parameterized values.
Burp Suite is an integrated platform for performing security testing of web applications. Its various tools work seamlessly together to support the entire testing process, from initial mapping and analysis of an application’s attack surface through to finding and exploiting security vulnerabilities. Burp Suite can spider a website very quickly and it usually finds most of the web pages on a site. Once it has spidered a website, it lets you exclude from attack any page it found during the scan - very useful when there are certain parts of a website you do not want to touch. Web Hacking with Burp Suite (Part 2: Scope, Spider, and Scan). So you made it through Part 1? Congrats on your success - that was the most boring part. Now it's time to hit the web and hack some apps. But where do we start? Well, I don't recommend hitting the open information highway and testing whatever you see. That might cause some problems.
This tutorial is yet another introduction to Burp Suite. It explains how to install and use Burp Suite, a fundamental tool used daily by bug hunters (and not only them) to test web applications.
“Burp Suite created by PortSwigger Web Security is a Java based software platform of tools for performing security testing of web applications. The suite of products can be used to combine automated and manual testing techniques and consists of a number of different tools, such as a proxy server, a web spider, scanner, intruder, repeater, sequencer, decoder, collaborator and extender.”
In order to pass HTTP traffic from our browser to Burp, we will use FoxyProxy, an add-on for Firefox and Chrome. Requests made in the browser can then be viewed, edited and analysed in Burp to find web application vulnerabilities. Later sections of this tutorial show how to use Burp Suite for specific tasks so that you can get accustomed to using it.
Before you start reading this tutorial there are some concepts you should probably know:
- What is an IP address? How are IP addresses assigned? What is a routing protocol? What is TCP/UDP?
If you don’t know much about this you can have a read at: http://www.cs.rpi.edu/~kotfid/ne1/CCNA_chapter2.pdf
This 300-page book can also help you on your journey: http://write.flossmanuals.net/data/messages/afzalkhalil/ccna_studyguide.pdf
- What is the HTTP protocol and how does it work?
There is a very good explanation of HTTP in the “Web Application Hacker’s Handbook”, which you can buy on Amazon, or you can check free sources such as https://www.tutorialspoint.com/security_testing/http_protocol_basics.htm.
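As a quick taste of what Burp will be showing you throughout this tutorial, here is a sketch of the raw text of an HTTP GET request, the same kind of message you will see in the Proxy and Repeater tabs. The host and path are placeholders, not a real target.

```python
# Build the raw text of a minimal HTTP/1.1 GET request. Host and path are
# illustrative placeholders.
def build_get_request(host: str, path: str = "/") -> str:
    lines = [
        f"GET {path} HTTP/1.1",
        f"Host: {host}",
        "User-Agent: tutorial-sketch",
        "Connection: close",
    ]
    # HTTP headers end with an empty line (CRLF CRLF)
    return "\r\n".join(lines) + "\r\n\r\n"

raw = build_get_request("example.com", "/index.html")
print(raw)
```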
What to download to get started
Download the following tools:
- The latest version of Java (see below);
- Burp suite free edition: here;
- Foxyproxy addon for Firefox or Chrome.
For Ubuntu open the terminal and run the following commands:
sudo add-apt-repository ppa:webupd8team/java # to add the Oracle’s repository
sudo apt-get update
sudo apt-get install oracle-java8-installer
Alternatively, open this page in the browser http://www.oracle.com/technetwork/java/javase/downloads/index.html, download the Java JDK and install it.
Setting up Foxyproxy
Install Foxyproxy for Firefox or Chrome and restart the browser.
Download links are:
- https://addons.mozilla.org/en-GB/firefox/addon/foxyproxy-standard/ (Firefox)
- https://chrome.google.com/webstore/detail/foxyproxy-standard/gcknhkkoolaabfmlnjonogaaifnjlfnp?hl=en (Chrome)
Click on Foxyproxy’s icon and click “Options”:
Click “Add new proxy”. In the “Proxy details” section → “Manual Proxy Configuration” insert the following values for Server and Port:
- Server: 127.0.0.1
- Port: 8080
In the “General” section, give the proxy a name and select a colour. Then save.
Now start the proxy you just created by right clicking on the Foxyproxy icon and selecting the newly created proxy.
Starting up Burp Suite
You have downloaded Burp Suite for either Windows or Linux. On Windows you can double-click the Burp executable to start it. On Linux you can do the same, or download the plain jar file, open a terminal in the folder where you downloaded Burp and run the following command:
java -jar burpsuite_community_v1.7.30.jar
Note: the jar file might be named differently.
Start Burp Suite with default settings.
You can see several tabs: Target, Proxy, Spider, Scanner, Intruder, Repeater, Sequencer, Decoder, Comparer, etc.
This tutorial will explain how to use Burp’s tools in the order you would probably use them at the start of a web application security assessment or bug bounty.
The proxy is used to intercept requests from your browser. These can be modified on-the-fly or can be viewed together with their responses in the 'HTTP history' tab.
Click “Proxy” → “Intercept” → “Intercept is on” to stop intercepting requests (the button toggles between on and off).
If you open a page in the browser while intercept is on, Burp will display the request sent by your browser, and until you press “Forward” (or toggle interception off) it won’t submit the request to the web application’s server or receive a response. What you will see in the browser is a page that keeps waiting for a response. That’s because Burp hasn’t sent the request yet.
In a few more words:
When interception is on, Burp grabs every request sent from the browser through its proxy and sends it on to its destination only when you press the “Forward” button (or turn interception off). It is good to have intercept on only when you know you want to catch a specific request and change it on the fly. The requests are stored in “Proxy” → “HTTP history” for later use even when intercept is off.
If intercept is on and you don’t really want to send the request on, click “Drop”. The request will not reach the destination, and you will probably see an error in your browser showing that the request was not submitted. Another reason to use “Drop” is when you want to see how a request is made without necessarily sending it to the web server. For example, you click a “Submit” button on the target site and the request is intercepted. Now click “Action” → “Send to Repeater” (or CTRL+R) and then “Drop”. This way the request is immediately available in Repeater for you to modify, without the original ever being submitted. You might want to do this when every request of this type generates a lot of traffic or creates a new entry in a database.
Proxy Options
Click “Proxy” → “Options” to see your proxy’s settings.
As you can see, the default port used by Burp for its proxy listener is 8080 - the same one you chose in FoxyProxy. You can choose a different port, or run multiple listeners; just remember to create the same configuration in both Burp and FoxyProxy. For example, you might use port 8080 for FoxyProxy on Firefox and port 8089 for FoxyProxy on Chrome, or the same port in both browsers if you want.
If you have the following FoxyProxy configuration: IP: 127.0.0.1, Port: 1337, then you must have the same configuration in Burp Proxy: IP: 127.0.0.1, Port: 1337. This is because the communication goes as follows:
- The user is browsing the target site;
- Foxy Proxy and Burp are configured with same IP and Port as explained above;
- Foxy Proxy is on, Burp Proxy is on;
- Foxy Proxy takes every single request the user makes and sends it to the proxy’s IP and port (in this case, Burp’s proxy);
- Burp intercepts the request and stores it in the HTTP History;
- At the same time, Burp forwards the request to the destination (the web application server) and waits for a reply;
- When the web server sends back a response page, Burp forwards this response back to the Browser.
Foxy Proxy makes sure all the requests are sent to Burp’s Proxy.
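The same flow applies to any HTTP client, not just a browser. Here is a minimal stdlib-only sketch of pointing a script’s traffic at Burp’s listener, assuming Burp is on 127.0.0.1:8080 as configured above; example.com is just a placeholder target, and the request is wrapped in try/except so the sketch doesn’t fail when Burp isn’t running.

```python
# Route urllib traffic through Burp's proxy listener (assumed 127.0.0.1:8080,
# matching the FoxyProxy settings above - change it if yours differs).
import urllib.request

BURP_PROXY = {
    "http": "http://127.0.0.1:8080",
    "https": "http://127.0.0.1:8080",
}

opener = urllib.request.build_opener(urllib.request.ProxyHandler(BURP_PROXY))

try:
    # If Burp is running, this request (and its response) lands in HTTP history.
    with opener.open("http://example.com/", timeout=5) as resp:
        print(resp.status)
except OSError:
    print("Burp proxy not reachable - start Burp and try again")
```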
Burp Suite's SSL Certificate
Installing Burp's certificate in your browser lets you intercept traffic sent to sites using SSL/HTTPS. Once you install Burp’s certificate as a trusted certificate authority (CA), the browser will stop complaining that your connection is not secure: you are telling the browser that Burp’s certificate is OK to use to encrypt/decrypt HTTPS traffic.
Try to open any HTTPS page without installing the certificate (e.g. https://www.facebook.com) and you will see the browser complain that the connection is not secure.
“By default, when you browse an HTTPS website via Burp, the Proxy generates an SSL certificate for each host, signed by its own Certificate Authority (CA) certificate. This CA certificate is generated the first time Burp is run, and stored locally. To use Burp Proxy most effectively with HTTPS websites, you will need to install Burp's CA certificate as a trusted root in your browser.
Note: If you install a trusted root certificate in your browser, then an attacker who has the private key for that certificate may be able to man-in-the-middle your SSL connections without obvious detection, even when you are not using an intercepting proxy. To protect against this, Burp generates a unique CA certificate for each installation, and the private key for this certificate is stored on your computer, in a user-specific location. If untrusted people can read local data on your computer, you may not wish to install Burp's CA certificate.”
Go back to your browser and open http://127.0.0.1:8080 (Burp also serves this page at http://burp):
This will open Burp proxy’s page on your local machine (The port could be different if you configured Burp to use a different one) where you can download Burp’s SSL certificate.
Download the certificate and install it in Firefox:
Click “Preferences” → “Advanced” → “Certificates” → “View Certificates” → “Authorities” → “Import” → select the file you just downloaded → tick all the options and click “OK”. Now the certificate is installed and you can browse sites using HTTPS without problems.
If you want to install the certificate in Chrome:
Click Chrome “Settings” → “Show Advanced Settings” → “HTTPS/SSL” → “Manage Certificates…” → “Authorities” → “Import” → select the file you just downloaded → tick all the options and click “OK”.
If you visit any sites running on HTTPS, you shouldn't see any warnings.
Proxy History
Open a page in the browser with Foxyproxy on (e.g. https://www.indeed.co.uk)
Open “Proxy” → “HTTP History” to see the HTTP requests sent from the browser to Burp. In this tab you can view information related to the requests.
You can see the hostname (the site), the HTTP method used to send the request, the URL of the page/request, whether parameters were sent with the request, whether it has been edited by you, the HTTP status of the response, the length of the response (the size of the page), the MIME type of the page (what type of page are we viewing - an HTML page? a script? an image?), the extension of the page and, finally, the title of the page.
Order the list by the most recent request by clicking the hash symbol (#) at the top of the first column on the left.
Click on any line in 'HTTP history' to see the request sent by the browser and the response sent by the application.
Click “Filter” to show the filter options for the “HTTP History”. Enable all the MIME types. This way you will see pictures and binaries in the HTTP History (this can be very useful).
Click anywhere in Burp’s main window to make the filter disappear.
Why is the “HTTP history” important during a test? Because it is a complete record of everything your browser exchanged with the target: you can search it for interesting endpoints, parameters, cookies and tokens, and send any entry straight to Repeater or Intruder for closer inspection.
Target Sitemap and Scope
Click “Target” → “Site Map”. You will see several websites included in the site map - all the sites you have visited since you started opening pages in the browser. Of course, during a web application assessment you don’t want to see all of them: they are not part of your scope and therefore you should not test them (legally speaking, you are not authorised to test these sites).
Select 'Use Advanced scope control'.
Let's add a target to our target scope. A way to do so is to select the website we want to test in the Proxy's HTTP history tab. Click 'Proxy' → 'HTTP history'.
Right-click one of the lines showing the site you opened in the browser. This opens a context menu that is very useful to Burp users; this menu and some of its functionality will be explained in this tutorial.
Click “Add to scope” in the small menu. Click “No” in the message displayed by Burp. You just added the site to your scope.
Go back to the Scope tab under Target.
You can notice a few things from the screenshot above:
- The site in scope is enabled. You can disable it and it will disappear from the “Site map” tab. Useful in some cases where you want to momentarily disable a target.
- Protocol. It says HTTPS. This means that in the “Site Map” tab you will see HTTPS requests only. Several websites use a mix of HTTP and HTTPS requests, so it is best to intercept both types.
- The host is www.indeed.co.uk but we want to target all subdomains of indeed.co.uk (for example test.indeed.co.uk, admin.indeed.co.uk, etc.). You want to add all these sites to the scope.
- Port. The standard port for HTTPS is 443; HTTP traffic generally travels on port 80. Note: sites can run on different ports, but generally HTTP is on port 80 and HTTPS on port 443. When you open www.ciaociao.com the browser will try to open the site on port 80 over HTTP; if you open https://www.ciaociao.com the browser knows you want port 443. As with the protocol point above, you should intercept requests on all ports, in case the sites in scope use both HTTP and HTTPS.
- File. As you can see the file is “favicon.ico” and that’s the only thing you will see in the site map. Of course this is not what you want. You want to see all the files and folders on the site. In some cases you will remove some folders of a site from the scope. For example when you visit a site and a specific folder is full of PDF files which might not be relevant to your test.
Select the target's line and click “Edit”. Modify the options as follows to include all the protocols, all the ports and all the subdomains for the site in scope then press OK.
Go back to the “Site Map” tab and click “Filter”. Select all MIME types (as you did for the HTTP history) and select “Show only in-scope items”.
Click in the main window to hide the filter tab. Click on the little arrow situated on the left of the targeted website to view the pages that you currently opened/requested.
The “Site Map” section is very important. Here you will see quite clearly the structure of the site.
As you can see, there are folders and files just like in a file manager. The yellow icons are used for folders and can be opened to show the files stored in that folder. The icons with a small engine wheel show URLs sent with parameters. For example:
If you right click on any of the folders or files you will open a small menu. Click “Remove from scope”, if you want to remove any folders or files from your list. You can see the exclusions in the “Scope” tab.
Another important section of the “Site Map” tab is the “Issues” section, which is available only in Burp’s Pro version. Burp Suite passively analyses each request and response and, if it finds any vulnerabilities, displays them in this section. The passive scanner does not send any requests to find vulnerabilities; it just analyses your traffic. Burp Suite Pro also has an “Active scan” feature which actively sends requests to find vulnerabilities; active scanning is not available in the free version of Burp Suite.
Spider
The spider is another important Burp feature: it lets you find the components of a site.
How does a spider work?
It is quite simple. When you open a page (e.g. https://www.indeed.co.uk) you are able to click buttons and links to reach other parts of the site, and several URLs (or parts of URLs) can be seen in the response provided by the web application’s server. A spider takes all those URLs and requests each one, sending an HTTP request to retrieve that file/page/folder. As soon as the spider receives a response from the web application, it reads its content looking for more URLs to open, and so on, until all the possible links have been followed. This tells you a few things:
- If the site is massive, you will likely be sending a lot of requests, even thousands. Be very careful when you use the spider feature: some sites might block your IP because you are about to send 40,000 requests. Spider only the sections you need, or first make sure the site is not huge (like eBay, Amazon, Facebook, etc.) and can be spidered.
- Spidering can waste precious time if you spider areas of the site which are not relevant. Make sure you focus your efforts (and bandwidth) if you want to be the first to report a bug.
- If the spider doesn’t find a direct URL to e.g. http://indeed.co.uk/random/admin1/login.php, it won’t add it to the site map and you will probably miss a juicy page. Other methods for finding hidden components are explained later.
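The crawling loop described above can be sketched against a made-up, in-memory “site”, so the logic is visible without sending any real traffic: fetch a page, extract its links, queue any in-scope link not seen before, and repeat until the queue is empty. The pages and links below are invented.

```python
# Toy spider over an in-memory site: the SITE dict plays the web server.
from html.parser import HTMLParser

SITE = {  # url -> HTML body returned by a pretend web server
    "/": '<a href="/jobs">jobs</a> <a href="/about">about</a>',
    "/jobs": '<a href="/jobs/1">one</a> <a href="/">home</a>',
    "/jobs/1": "no links here",
    "/about": '<a href="https://other.example/x">out of scope</a>',
}

class LinkExtractor(HTMLParser):
    """Collect the href of every <a> tag in a page."""
    def __init__(self):
        super().__init__()
        self.links = []
    def handle_starttag(self, tag, attrs):
        if tag == "a":
            for name, value in attrs:
                if name == "href":
                    self.links.append(value)

def spider(start="/"):
    visited, queue = set(), [start]
    while queue:
        url = queue.pop()
        # scope check: only crawl pages our pretend server actually hosts
        if url in visited or url not in SITE:
            continue
        visited.add(url)
        parser = LinkExtractor()
        parser.feed(SITE[url])
        queue.extend(parser.links)   # newly discovered links join the queue
    return visited

print(sorted(spider()))   # every reachable in-scope page
```

Note how the external link under /about is discovered but never requested - that is exactly what Burp's scope settings do for the real spider.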
Spider Options
Before using Spider there are a couple of settings you might want to change in “Spider” → “Options”.
In “Forms Submission” click “Don’t submit forms”. A form is the part of a website where you have input fields and (usually) a button to submit the details you entered. A form could be a login prompt (username and password with a “login” button), a search component, a contact form, etc.
Why is it better not to submit forms when spidering?
Imagine a site where several pages contain a contact form. Burp Spider will input semi-random data into the fields, submit the form, then read its response. What happens to the data you submitted? It may well be sent via email to one or more of your client’s employees (it’s a contact form, so you should expect someone to read what you sent). This can be very annoying if you have just submitted hundreds of emails; clients don’t like this. When you find a form, try submitting it manually to see the resulting page. In cases where you are sure there is no danger, you can activate spidering of forms again.
Set “Don’t submit login forms” in “Application Login”. Similarly to the paragraph above, you don’t want to submit login forms in case the client doesn’t want you attempting a brute-force attack against login pages. You can do this manually; it won’t take long.
The Spider engine has two useful options:
- Number of threads. How many requests should the spider work on at the same time? It is important to set this option correctly. Let’s say a website is not very responsive (it’s old or just badly designed) and you start sending many requests per second: this could cause a denial of service (it happens). Some websites are protected by Intrusion Detection Systems (IDS), Intrusion Prevention Systems (IPS) or a Web Application Firewall (WAF). If any of these systems notices you sending several requests at the same time, it might block your IP from reaching the site as a defence mechanism. Lower the number of threads if required.
- Throttle between requests. Same as above: some IDS, IPS or WAF might notice your traffic and decide to block you. Send requests with a throttle (1000 milliseconds = 1 second); this way you won’t get your IP blocked, but spidering will take a bit longer.
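The two engine knobs can be pictured in code: a worker pool size stands in for "number of threads" and a sleep stands in for the throttle. The URLs and the fetch function are simulated, not real requests.

```python
# Simulated spider engine: max_workers = "number of threads",
# the sleep inside fetch() = "throttle between requests".
import time
from concurrent.futures import ThreadPoolExecutor

def fetch(url, throttle=0.05):
    time.sleep(throttle)          # wait a little between requests
    return f"fetched {url}"

urls = [f"/page/{i}" for i in range(6)]

with ThreadPoolExecutor(max_workers=2) as pool:   # two requests in flight at once
    results = list(pool.map(fetch, urls))

print(results)
```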
Spidering
How do you spider a site?
Go to “Target” → “Site Map” then right-click on the site you want to spider and select “Spider this host”. This will spider the entire site. If you prefer to spider a specific folder, right-click the folder of the site in the “Site Map” area and select “Spider this branch”.
Once you have started spidering a site/folder, go back to “Spider” → “Control” to see how many requests have been queued (how many requests the spider still has to send). This tells you whether you have found an area of the site with hundreds of requests still to be made.
This is what happens if you try to spider indeed.co.uk:
This is why it is important to select only specific areas to spider when you have to test a huge website and don’t want to spend your spidering resources requesting very similar pages (e.g. if the site posts job ads, you can assume that all job ads have a similar structure; if one of these pages is vulnerable, all the others will be as well, and you don’t need to see/request all of them to know that).
Repeater, as the name suggests, lets you repeat/replay a request. It shows the request in the left pane; you press “Go”, Burp sends the request, and the response is shown in the right pane. You can go back and forth through all the requests and responses you have sent to Repeater (these are not added to the site map or HTTP history unless you add them yourself).
This is useful because:
- Otherwise you would probably use the browser: copy the URL, modify it, paste it back into the address bar, press Enter and see the result (if you feel brave, you could look into the 'curl' command on Linux).
- You can’t modify HTTP POST requests in the browser’s search bar unless you install a plugin/addon.
- It is much easier to search for a specific keyword into the response of the request.
- It is much easier to select the value of a parameter and replace it with the value you want.
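What Repeater lets you do by hand (swap one parameter's value and resend) looks roughly like this in code; the URL and parameter names are illustrative, not a real endpoint.

```python
# Take a captured URL, replace one query parameter's value, and produce the
# modified URL ready to be re-sent - the core of a Repeater workflow.
from urllib.parse import urlsplit, urlunsplit, parse_qs, urlencode

def set_param(url, name, value):
    parts = urlsplit(url)
    params = parse_qs(parts.query)
    params[name] = [value]                     # overwrite the chosen parameter
    return urlunsplit(parts._replace(query=urlencode(params, doseq=True)))

original = "https://www.example.com/resumes?q=penetration+testing&l=London"
print(set_param(original, "q", "java developer"))
```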
To send a request to Repeater, go to the “Target” → “Site Map” tab or the “Proxy” → “HTTP History” tab and select the request you want to repeat. Right-click it and choose “Send to Repeater” (or press CTRL+R).
Indeed.com lets you find jobs and people to hire. Let’s try to find someone with “penetration testing” skills.
Try opening the following request in the browser, find it in “HTTP History” and send it to Repeater:
Click “Go” and see the resulting response.
From the screenshot above you can notice a few things:
- The request is on the left; the response is on the right;
- In blue you have parameters (including cookies) and in red you have their values;
- The request is an HTTP GET request
- The resulting page is 27,165 bytes (see bottom right corner)
- It took 980 milliseconds to load the response (see bottom right corner)
- The response Content-Type is text/html (a classic HTML page)
- The word “penetration” is used in the request (parameter “q”) and found 40 times in the response.
Try changing the values of the request’s parameters to see what happens to the response. Note:
- You are not looking for a vulnerability here, you just want to see how easy it is to use repeater.
- If you click “Params” in the “Request” pane you will see all the cookies and parameters of the request. This can be helpful when a request has a lot of parameters and you want to see them in order and modify them quickly.
Intruder
“Burp Intruder is a powerful tool for automating customized attacks against web applications. It can be used to automate all kinds of tasks that may arise during your testing.”
A few of the automated tasks you might want to achieve are:
- Brute-forcing login pages. You have a login form, a list of users and a list of passwords, and you want to see whether any of the accounts use a weak password.
- Brute-forcing numeric IDs. For example, you have the following request http://www.somesites.com/user/private/documents/?id=10249 which allows you to download a document you own (with ID 10249). You have found that this component is vulnerable to IDOR (insecure direct object reference), which lets you see other users’ documents. Now you want to download all these documents for a quick proof of concept. Intruder will do the job.
- Testing all parameters for a specific list of payloads. You have a list of payloads which could suggest the site is vulnerable to X or Y. You want to test all the parameters and cookies with your list.
- Discovering hidden content, similar to what you would do with tools such as dirb and dirsearch.
Intruder example
The following example will show you how to use intruder to brute force numeric IDs.
The following request (http://example.com/groupmembers.asp?groupid=1300) allows unauthenticated users to view the members’ directory of the group with ID 1300. Users’ email addresses are disclosed on the page (yes, users log in with their email addresses).
If you tried to open the same URL with the groupid changed to 1299, you would be redirected to the login page and no emails would be shown to you. This is probably because some groups no longer exist or their members are private. This is why you want to use Intruder: to catch the pages that do expose regular users’ email addresses.
Why would you want to see the email addresses of all the users subscribed to the site? To later perform a brute force against a login page!
When you access the above page, the URL will be added to the proxy’s HTTP history. Right-click on the line of the request and click “Send to Intruder”.
Click the “Intruder” tab and you will be presented with the “Target” tab. This shows you your target host and port. There is nothing to be changed here. Click “Positions” and you will see something like this:
Two points to notice from the screenshot above:
- Attack type: Sniper. Intruder has four attack types that determine how payloads are delivered:
- “Sniper” uses a single payload list and sends each payload to each selected position in turn, leaving the other positions unchanged. So with a payload list made up of “hello”, “my”, “name”, “is”, “Burp”, Sniper will replace the value of the first position with “hello”, then “my” and so on, and then do the same with the “groupid” position.
- “Battering ram” uses a single payload list and puts the same payload into all the selected positions at the same time. Using the list above, “hello” is sent in both the first position and the “groupid” position, then “my”, and so on.
- “Pitchfork” lets you choose a payload list for each position; the lists are walked in lockstep, so the first value of the first list goes into the first position while the first value of the second list goes into the second position, then the second value of each list, and so on.
- “Cluster bomb” lets you choose a payload list for each position and then tests every combination, so that all the payloads for position 1 are tried with all the payloads for position 2. Examples will be provided in this or later tutorials.
- The values of the parameters are between “§” signs (and highlighted in orange). This shows that two positions are selected and the payloads you choose will be applied to both. Burp Intruder marks all parameters and cookies by default.
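The four attack types can be pictured as payload generators over the marked positions. This is a toy model, not Burp's actual implementation: "positions" holds the original values, and the payload lists are made up.

```python
# Toy model of Intruder's four attack types for two marked positions.
import itertools

def sniper(positions, payloads):
    # one payload set; each position gets each payload in turn, others unchanged
    for i in range(len(positions)):
        for p in payloads:
            row = list(positions)
            row[i] = p
            yield tuple(row)

def battering_ram(positions, payloads):
    # the same payload goes into every position at once
    for p in payloads:
        yield tuple(p for _ in positions)

def pitchfork(*payload_lists):
    # list N feeds position N; lists are walked in lockstep
    return zip(*payload_lists)

def cluster_bomb(*payload_lists):
    # every combination of every list
    return itertools.product(*payload_lists)

print(list(sniper(("token", "1300"), ["A", "B"])))
print(list(battering_ram(("token", "1300"), ["A", "B"])))
print(list(pitchfork(["A", "B"], ["1", "2"])))
print(list(cluster_bomb(["A", "B"], ["1", "2"])))
```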
Leave Sniper selected and click “Clear §” on the right side of the screen to deselect the positions. What we want from this exercise is to brute force only 50 group IDs (we don’t want to send too many requests at this moment).
Select the value of the “groupid” parameter and change it to 1300. Now select the two zeros of 1300 and click “Add §”:
As you can imagine, whatever payload you choose will go between the “§” signs to substitute the “00”. We want to test IDs 1300 up to 1350.
Click “Payloads” tab.
“Payload Sets” lets you select the payload set, i.e. which injection point you are configuring (in this case there is only one), and the payload type. Burp Intruder has several payload types; the most used are probably “Simple list” (you can choose a file with one payload per line) and “Numbers” (a way to brute force numeric IDs).
“Payload options” allows you to change the way you use a payload type. In the case of “simple list”, it lets you choose a file or one of Burp’s default lists.
“Payload processing” allows you to modify the payload on the fly. For example you can add a prefix or suffix to the payload.
“Payload encoding” allows you to choose if you want to URL encode specific characters. Some web applications require specific characters to be encoded or the request will fail.
Select “Payload type” → “Numbers” and input the following options:
Here we are telling Burp to use numbers from 00 to 50. We start from 00 because the part of the parameter we selected is made up of two zeros (remember the “13§00§”?). We also set both the minimum and maximum number of digits to 2, meaning Burp should send 00, 01, 02 and not just 0, 1, 2, etc.
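The same payload set can be reproduced in a couple of lines, which also shows exactly which requests Intruder is about to send; example.com stands in for the target as in the request above.

```python
# Recreate Intruder's "Numbers" payload set: 00 to 50, always two digits,
# substituted into the "13§00§" position of the groupid parameter.
payloads = [f"{n:02d}" for n in range(0, 51)]
urls = [f"http://example.com/groupmembers.asp?groupid=13{p}" for p in payloads]

print(payloads[0], payloads[-1], len(payloads))
print(urls[0])
```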
We will cover Burp Intruder options in the next tutorials.
Click Intruder in the top bar and “Start Attack”:
Burp will start sending the payloads as you specified.
From the screenshot above you can notice a few points:
- The payload you used is in the second column
- The HTTP response code in the third column is different depending on the request (some “200 - OK” and some “302 - Redirection”).
- The length of the response in the sixth column is different depending on the request.
Points 2 and 3 clearly tell you that some requests give you back the page of a group (the ones with HTTP code 200 and a greater response length) while others redirect you to the login page (HTTP code 302 and response length 392).
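Separating hits from redirects by status code and response length, as just described, is easy to sketch. The result rows below are made-up (payload, status, length) tuples, not real Intruder output.

```python
# Keep only the responses that came back 200 with a body larger than the
# login-redirect size - the same triage you would do by eye in Intruder.
results = [
    ("00", 200, 5120),
    ("01", 302, 392),
    ("02", 200, 4870),
    ("03", 302, 392),
]

REDIRECT_LENGTH = 392   # every "not allowed" response had this exact size

hits = [payload for payload, status, length in results
        if status == 200 and length > REDIRECT_LENGTH]
print(hits)   # group IDs whose member pages actually rendered
```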
During a real-world attack scenario you would probably enumerate all the group IDs, then find all the email addresses for the users and then if allowed by the engagement rules, perform a brute force attack.
Decoder lets you decode/encode strings in different formats such as URL, Base64 and HTML encoding. As mentioned above, web applications may accept or require encoded characters in order to understand specific requests.
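A sketch of the same encode/decode operations Decoder performs, using Python's standard library; the sample string is arbitrary.

```python
# URL- and Base64-encode a string, then decode it back - what Decoder does.
import base64
from urllib.parse import quote, unquote

s = "user@example.com&role=admin"

url_encoded = quote(s, safe="")            # %-encode every reserved character
b64_encoded = base64.b64encode(s.encode()).decode()

print(url_encoded)
print(b64_encoded)
print(unquote(url_encoded))                          # decodes back to s
print(base64.b64decode(b64_encoded).decode())        # decodes back to s
```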
Comparer lets you compare requests and responses. This can be useful when you have submitted two requests, changing the value of a parameter, and the resulting response differs from the first one by a few bytes: you want to know where in the page something has changed.
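A Comparer-style line comparison can be sketched with difflib; the two responses below are invented.

```python
# Find where two near-identical responses differ, line by line.
import difflib

resp_a = "HTTP/1.1 200 OK\nContent-Length: 120\n\nWelcome back, alice"
resp_b = "HTTP/1.1 200 OK\nContent-Length: 118\n\nWelcome back, bob"

diff = [line for line in difflib.unified_diff(
            resp_a.splitlines(), resp_b.splitlines(), lineterm="")
        if line.startswith(("+", "-")) and not line.startswith(("+++", "---"))]
print(diff)   # only the changed lines, prefixed with - (first) and + (second)
```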
Burp has more features that would be good to discuss. This tutorial won’t go into more details but later tutorials will show other ways to exploit Burp’s potential.
In my last post I covered setup for Burp Suite, as well as the Proxy and Target tabs.
This blog post will cover the Spider, Intruder and Repeater tools, which start to show the usefulness and power of Burp Suite. Since everything is more fun with examples, I’ll be using practice hacking sites to demo some of these features. : )
If you don’t have Burp Suite set up yet, check out this blog post first.
First up is the Spider tool, which is a web crawler. Burp’s website states:
Burp’s cutting-edge web application crawler accurately maps content and functionality, automatically handling sessions, state changes, volatile content, and application logins.
In other words, it programmatically crawls a website(s) for all links and adds them to the Site Map view in the Target tab. If you worked through the last post and its examples, then you have already (passively) used the Spider tool.
Why is this useful? Having a complete site map helps you understand the layout of a website and makes you aware of all the different areas where vulnerabilities might exist (for example, seeing the gear icon on a page means that data can be / has been submitted). Doing that by browsing through the website is time-consuming, especially if you have a very complex website.
The Spider tool does all of that for you by recursively finding and requesting all links on a given website.
Make sure you set your scope before you run the Spider tool!
We covered scope in the last blog post; it's a way of limiting which websites are shown to you within Burp, and which websites are used by other tools (which sites do you want to be sending requests to?).
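Conceptually, in-scope spidering boils down to: fetch a page, extract its links, and only queue the links that match your scope. Here's a toy sketch of that loop, with a hard-coded link graph standing in for real HTTP fetches (the URLs and scope pattern are hypothetical):

```python
# Toy illustration of in-scope crawling: only links matching the
# scope pattern get queued; everything else is dropped. The link
# graph is hard-coded here; a real spider would fetch each page.
import re
from collections import deque

SCOPE = re.compile(r"^https://xss-game\.appspot\.com/")

# Hypothetical "links found on each page", standing in for real fetches.
LINKS = {
    "https://xss-game.appspot.com/": [
        "https://xss-game.appspot.com/level1",
        "https://accounts.google.com/login",   # out of scope: never queued
    ],
    "https://xss-game.appspot.com/level1": [
        "https://xss-game.appspot.com/level1/frame",
    ],
    "https://xss-game.appspot.com/level1/frame": [],
}

def spider(start):
    seen, queue, site_map = set(), deque([start]), []
    while queue:
        url = queue.popleft()
        if url in seen or not SCOPE.match(url):
            continue
        seen.add(url)
        site_map.append(url)
        queue.extend(LINKS.get(url, []))
    return site_map

print(spider("https://xss-game.appspot.com/"))
```

Without the `SCOPE.match` check, the crawler would happily wander off to `accounts.google.com` and beyond, which is exactly the "spidering outside your intended target" problem described below.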
For this example, I'll be using XSS Game. First, I turn FoxyProxy on in my browser, and make sure that the settings in the Proxy > Options tab match my FoxyProxy options.
Next, I go to the Target > Scope tab to set my scope. I add a new scope and type “xss-game”. If you do not set a scope when spidering, it will crawl things outside of your intended target. Depending on what those sites are, that might be bad. 🙃
If you go to Spider > Control, you can see that the scope defaults to "Use suite scope", which is the scope we just defined. You can also set a custom scope if needed, which will function separately from the scope applied to other tools.
To start spidering, you have a few different options. As we saw in the last blog post, you can right-click a request from numerous places (Proxy > HTTP History, Proxy > Intercept, Target > Site Map, etc.) and send the request to other tools.
In the Target > Site Map view, you can see that I’ve already visited one site from XSS Game (I visited the splash page).
Right-click this and select “Spider this branch”. In other views, you can right-click a request and say “Spider from here”.
When I do this, I can see that the Spider tab has lit up orange.
If I want to see how many requests are being made, or if I need to stop the tool for some reason (maybe things are getting recursively crazy), go to Spider > Control.
If the “Spider is running” button is grey/depressed, that means it’s currently running. You can press the button to stop it, and then clear any upcoming queues if need be.
Here are the results:
In this case, the results aren’t that impressive. We probably could have found most of those by just browsing. But, hopefully it’s clear how this would be useful for much larger websites.
Form Submissions and Other Options
I also ran the Spider tool on a local copy of OWASP’s WebGoat tool (which meant that I had to add
localhost to my scope before Spidering). WebGoat is an intentionally vulnerable web app used to teach various attacks, and includes two different login accounts.
When I started running the Spider tool, I saw this pop-up in Burp:
I already knew the login (it was provided), so I typed “guest” and “guest” into the username and password fields. But then the form submission pop-up appeared again. If this was a bigger application, then this would get very annoying very quickly.
If we go to the Spider > Options tab and scroll down, we see that there are automated responses we can choose for a login form:
It defaults to “prompt for guidance” but we could change the settings with our known credentials.
If you scroll up or down on the Spider > Options tab, you’ll see that there are automated responses for other forms as well. Be sure to look this over and either modify the field values, turn automated form submission off, etc.
The Options tab is also where you can turn off “passive spidering” (where Burp adds information to your Site Map as you browse). Max link depth and parameterized requests per URL can also be configured on this page.
The Spider tool is a web crawler that recursively requests every link it finds, and adds it to the Site Map. Before you use it, it is important to set the scope (Target tab) and also define the Spider’s behavior when it encounters logins or other forms.
The Proxy tool lets you intercept requests, and the Site Map and Spider tools help show the breadth and depth of a target. But finding malicious payloads (or CTF flags) happens at the single-request level.
The Repeater tool is a manual tampering tool that lets you replay individual requests and modify them. This is often called “manual” testing.
I'll be showing the Repeater tool on the XSS Game website (I'm doing this in Firefox; Chrome has an XSS-blocking feature).
In the Proxy > HTTP History or Target > Site Map view, right-click on a single request and select “Send to Repeater”. The Repeater tab should light up orange. Here, I’m right-clicking on a
/level1 request for XSS Game where I’ve sent a query (“hi”).
This will show up in the Repeater view as a numbered tab (which you can rename).
If I click “Go” it will send the request again, and I can see that the query string of “hi” (once again) did not allow me to move to the next level of XSS Game.
Let's try this again and swap out "hi" for "<script>alert('hi')</script>". I can do this by highlighting "hi" and typing my new payload.
Then, I can click Go. I see in the output that my script tags are still intact, which means that my XSS attack might work. From here, I have two options. I can either:
- Copy/paste my payload into the website and do it manually, or
- Use Burp to automate a browser request.
I want to do the second option, so I right-click anywhere in the Response area, and say “Request in browser” and select “original session”.
This will pop-up a window with a temporary link. If you copy/paste this into your browser, then you will be redirected to the website with the payload you created in Burp.
Once again, yes, this is a simple example, but it simplifies a lot of the trial-and-error that might occur while testing out a page.
Better yet, you also get forward and back history buttons, so if you want to go back to a previous request you made, it has already been saved in your history, and it’s easily accessible.
Additionally, the response payloads will likely be much bigger on a "real" website. You can use the buttons at the bottom of the Response view to search for matching strings or regexes (e.g. "Success!").
Lastly, the responses can be viewed in a variety of ways. You can see the raw response, just the headers, the HTML, or the rendered page.
Repeater is a manual tampering tool that lets you copy requests from other tools (Proxy, Target, etc.) and modify them before sending them again to the target. The Repeater makes it easy to modify the payload, and also provides links so that you can quickly repeat the attack in the browser.
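The core of the Repeater workflow, captured request in, modified request out, can be scripted too. This sketch just rebuilds a captured GET request's URL with a new value for one query parameter; the actual sending step is left as a comment so it stays offline (the captured URL mirrors the XSS Game example above):

```python
# Sketch of the Repeater workflow: take a captured GET request and
# rebuild it with a different value for one query parameter.
from urllib.parse import parse_qs, urlencode, urlparse, urlunparse

# The request captured via the Proxy (URL based on the XSS Game example).
captured = "https://xss-game.appspot.com/level1/frame?query=hi"

def with_payload(url, param, payload):
    """Return `url` with `param` replaced by `payload` (URL-encoded)."""
    parts = urlparse(url)
    params = parse_qs(parts.query)
    params[param] = [payload]
    return urlunparse(parts._replace(query=urlencode(params, doseq=True)))

tampered = with_payload(captured, "query", "<script>alert('hi')</script>")
print(tampered)

# A real replay would then send it, e.g. with the requests library:
#   import requests; requests.get(tampered)
```

Note that `urlencode` percent-encodes the payload for transport, just as the browser does when you paste a payload into a form; the server decodes it back before reflecting it into the page.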
The last tool covered in this post is the Intruder tool. Imagine if we wanted to login to an application but we didn’t know the username or password. We could copy a login request over to the Repeater tool, and then manually select the username and password and replace it each time with some options from a list.
Of course, we’d have to do this hundreds or even thousands of times. If we want to automate a process like this, where we have a changing parameter and a known set of values that we want to try, then it’s time to use the Intruder tool.
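The Intruder logic described here is, at its core, a loop over the Cartesian product of the payload lists. A miniature sketch, where `check_login` is a fake oracle standing in for actually sending the request and measuring the response (the lists and response lengths are invented):

```python
# Cluster-bomb logic in miniature: try every combination of the two
# payload lists. check_login is a fake stand-in for sending a login
# request and measuring the response length.
from itertools import product

usernames = ["admin", "root", "webgoat", "guest"]
passwords = ["password", "admin", "guest"]

def check_login(user, pwd):
    # Hypothetical oracle: pretend only guest/guest gets the long
    # "logged in" response; everything else gets a short error page.
    return 1540 if (user, pwd) == ("guest", "guest") else 430

results = [(u, p, check_login(u, p)) for u, p in product(usernames, passwords)]
print(len(results))  # 4 usernames x 3 passwords = 12 requests

# Sort by response length, as you would in Intruder's results table:
# the outlier is usually the interesting one.
for user, pwd, length in sorted(results, key=lambda r: -r[2])[:1]:
    print(user, pwd, length)
```

This is also why the request count grows so fast with cluster-bomb attacks: two lists of 1,000 entries each means 1,000,000 requests.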
I’m going to use OWASP’s WebGoat site for this example, since it has a login form. I have this running locally. I go to the login form on the site, and try a username/password combination (I know the correct combination but for this example, let’s pretend that I don’t know).
In the Proxy > HTTP History tab, I find the request that corresponds to my guess.
I right-click on the request view and select “Send to Intruder”. I should see the Intruder tab light up orange, denoting that there’s new activity in that tool.
If we click over to the Intruder tab, we see this. **It's a very good idea to double-check these values each time, as the Intruder tool is going to send a LOT of requests to your target.** Make sure it's correct so you're not sending these requests to someone else!
Next, click on the Positions tab. Burp Suite has helpfully identified what it thinks are values that we want to parameterize. In this case, the session ID, the username and the password.
If we want to set this ourselves, we can click "Clear". Then, highlight the value you want to parameterize and click Add. This will add payload position markers (the § symbols) around the word. Parameterizing values means that we can programmatically change the value in our requests.
In this example, I've parameterized the username and password values. Then, I selected "Cluster bomb" as the attack type. This means that it will try every username and password combination I give it (the Cartesian product of the two lists).
Next, click the Payload tab. Since we have two payloads (username and password), we will have to set each one individually. You can select one at a time from the first section:
We’ll use “Simple list” as the payload type for this, but there are many other options, like “numbers” which could be used to find IDs or change a value in a longer string of characters.
If you have the Pro version, then you can use pre-defined lists in Burp. If you are using the free version, you can either load in a list (e.g. a well-known password list like "rockyou") or create your own. For this example, I will make my own list of 4 possible usernames by typing them in and clicking Add. Since Payload Set "1" was selected in the Payload Sets section, this applies to my first parameter, which is username.
Next, I have to set the Payload Set to “2” and make some possible passwords.
Now I can see that I've got a request count of 12, which makes sense. I've got 4 usernames and 3 passwords. If I try every combination (since I set my attack type to "Cluster bomb"), then I will have 12 requests.
Next, I click “Start Attack” in the Payload Sets options. If you have the free version, your attacks will be throttled, so big lists will take a long time. 12 requests should go pretty quickly, though.
I’ll see a pop-up window that lists all the attacks. In “real” attacks, this would be much longer, so I can use the Grep – Match tool in Intruder > Options, or just sort by HTTP status code or response length to find the interesting responses.
In this case, it’s obvious since we have such a short list. The last combination, which is “guest” / “guest”, returns a much longer response than the other attempts. This is the correct set of credentials (the added response length is from the login cookie we received).
As with the other tools, the Options tab is worth checking out. You can limit the number of threads/retries/etc. You can also use the Grep sections to sort through your attack results more easily.
The Intruder tool automates requests when we have positions whose values we want to swap out, and we have a set of known values for those positions. We can configure the attack with user-, list- or Burp-defined values for each position, and use grep and other tools to sort through the results.
After discussing Burp Suite setup, and the Proxy and Target tools in the last blog post, this post discussed the Spider, Repeater and Intruder tools. Spider is used to more thoroughly map out a site, Repeater is used for manually tampering and replaying requests, and Intruder is used to automate a large number of requests with parameterized values.