Domain Analyzer – Tool For Analyzing the Security of a Domain

Domain Analyzer is a security analysis tool that automatically discovers and reports information about a given domain. Its main purpose is to analyze domains in an unattended way.
It takes a domain name and finds information about it, such as DNS servers, mail servers, IP addresses, email addresses found via Google, SPF records, etc. Once all the information is stored and organized, it scans the ports of every IP found using nmap and performs several other security checks. After the ports are found, it uses the tool crawler.py to spider the complete website on every web port discovered. The crawler can optionally download files and detect open directories.
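
In practice, the discovery stage boils down to a handful of DNS lookups. The sketch below is an illustration only (not the tool's actual code) and assumes the dnspython 2.x library is installed:

# Illustration of the kind of DNS queries the discovery stage performs.
# Assumes dnspython 2.x (dns.resolver.resolve); not domain_analyzer's real code.
import dns.exception
import dns.resolver

def basic_dns_recon(domain):
    results = {}
    for rtype in ("NS", "MX", "A", "TXT"):   # name servers, mail servers, IPs, SPF/TXT
        try:
            answers = dns.resolver.resolve(domain, rtype)
            results[rtype] = [a.to_text() for a in answers]
        except (dns.resolver.NoAnswer, dns.resolver.NXDOMAIN, dns.exception.Timeout):
            results[rtype] = []              # record type absent or lookup failed
    return results

# Example: basic_dns_recon("example.com")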

Features:

  • Creates a directory with all the information, including nmap output files.
  • Uses colors to highlight important information on the console.
  • Detects some security problems like hostname problems, unusual port numbers and zone transfers.
  • Heavily tested and very robust against DNS configuration problems.
  • Uses nmap for active host detection, port scanning and version information (including nmap scripts).
  • Searches SPF records to find new hostnames or IP addresses.
  • Searches for reverse DNS names and compares them to the hostnames.
  • Prints out the country of every IP address.
  • Creates a PDF file with results.
  • Automatically detects and analyzes sub-domains!
  • Searches for the domain's email addresses.
  • Checks the 192 most common hostnames in the DNS servers.
  • Checks for zone transfers on every DNS server (see the sketch after this list).
  • Finds the reverse names of the /24 network range of every IP address.
  • Finds active hosts using nmap's complete set of techniques.
  • Scans ports using nmap.
  • Searches for host and port information using nmap.
  • Automatically detects the web servers in use.
  • Crawls every web server page using our Web Crawler Security Tool.
  • Filters out hostnames based on their name.
  • Pseudo-randomly searches N domains on Google and automatically analyzes them!
  • Supports CTRL-C to stop the current analysis stage and continue working.
  • Can read an external file with domain names and try to find them on the domain.
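
For instance, the zone-transfer check mentioned in the list can be reproduced in a few lines. This is a hedged sketch using dnspython, not the tool's own code; the name server IP and domain below are placeholders:

# Ask a name server for a full zone transfer (AXFR) and list the exposed hostnames.
# Sketch only; assumes dnspython is installed.
import dns.exception
import dns.query
import dns.zone

def try_zone_transfer(ns_ip, domain):
    try:
        zone = dns.zone.from_xfr(dns.query.xfr(ns_ip, domain, timeout=10))
        return [name.to_text() for name in zone.nodes.keys()]
    except (dns.exception.FormError, dns.exception.Timeout, OSError, EOFError):
        return []                            # transfer refused or failed

# Example: try_zone_transfer("192.0.2.53", "example.com")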

Usage:

usage: domain_analyzer.py -d <domain> <options>
options:
 -h, --help                               Show this help message and exit.
 -V, --version                            Output version information and exit.
 -D, --debug                              Debug.
 -d, --domain                             Domain to analyze.
 -L <list>, --common-hosts-list <list>    Relative path to txt file containing common
                                          hostnames. One name per line.
 -j, --not-common-hosts-names             Do not check common host names. Quicker but
                                          you will lose hosts.
 -t, --not-zone-transfer                  Do not attempt to transfer the zone.
 -n, --not-net-block                      Do not attempt to -sL each IP netblock.
 -o, --store-output                       Store everything in a directory named after the 
                                          domain. Nmap output files and the summary are 
                                          stored inside.
 -a, --not-scan-or-active                 Do not use nmap to scan ports nor to search
                                          for active hosts.
 -p, --not-store-nmap                     Do not store any nmap output files in the 
                                          directory <output-directory>/nmap.
 -e, --zenmap                             Move xml nmap files to a directory and open 
                                          zenmap with the topology of the whole group. 
                                          Your user should have access to the DISPLAY 
                                          variable.
 -g, --not-goog-mail                      Do not use goog-mail.py (embedded) to look 
                                          for emails for each domain.
 -s, --not-subdomains                     Do not analyze sub-domains recursively. 
                                          You will lose subdomain internal information.
 -f, --create-pdf                         Create a pdf file with all the information.
 -l, --world-domination                   Scan every gov, mil, org and net domain of 
                                          every country in the world. Interesting if 
                                          you don't use -s.
 -r, --robin-hood                         Send the PDF report to every email address 
                                          found, using the MX servers found for the 
                                          domain. Good girl.
 -w, --not-webcrawl                       Do not web crawl every web site found 
                                          (on every port) looking for public web 
                                          misconfigurations (directory listing, etc.).
 -m, --max-amount-to-crawl                If you crawl, do it up to this amount 
                                          of links for each web site. Defaults to 50.
 -F, --download-files                     If you crawl, download every file to disk.
 -c, --not-countrys                       Do not resolve the country name for every IP 
                                          and hostname.
 -C, --not-colors                         Do not use colored output.
 -q, --not-spf                            Do not check SPF records.
 -k, --random-domains                     Find this number of domains from Google 
                                          and analyze them. For the base domain use -d.
 -v, --ignore-host-pattern                When using nmap to find active hosts and 
                                          to port scan, ignore hosts whose names 
                                          match this pattern. Separate them with commas.
 -x, --nmap-scantype                      Nmap parameters to port scan. 
                                          Defaults to: '-O --reason 
                                          --webxml --traceroute -sS -sV -sC -PN 
                                          -n -v -F'.
 -b, --robtex-domains                     If a DNS server with zone transfer enabled 
                                          is found, search for other unrelated domains 
                                          served by that DNS server on Robtex and 
                                          analyze them too.
 -B, --all-robtex                         Like -b, but also when no zone transfer was 
                                          found. Useful to analyze all the domains on 
                                          one corporate DNS server. Also implies -b.
Press CTRL-C at any time to stop only the current step.
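
For reference, the default -x scan type is simply an nmap command line. A minimal Python sketch (illustration only, not the tool's internal code) of how those default parameters could be handed to nmap:

# Run nmap against one IP with the documented default parameters.
# -O and -sS require root, which is why domain_analyzer.py runs as root.
import subprocess

DEFAULT_NMAP_ARGS = "-O --reason --webxml --traceroute -sS -sV -sC -PN -n -v -F"

def scan_host(ip, extra_args=DEFAULT_NMAP_ARGS, output_prefix=None):
    cmd = ["nmap"] + extra_args.split() + [ip]
    if output_prefix:
        cmd += ["-oA", output_prefix]        # keep normal/greppable/XML output files
    return subprocess.run(cmd, capture_output=True, text=True).stdout

# Example: scan_host("192.0.2.10", output_prefix="example.com/nmap/192.0.2.10")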

Crawler

Its main features are:
  • Crawls HTTP and HTTPS websites.
  • Crawls HTTP and HTTPS websites on non-standard ports.
  • Uses regular expressions to find ‘href’ and ‘src’ HTML attributes, as well as content links (see the sketch after this list).
  • Identifies relative links.
  • Identifies domain related emails.
  • Identifies directory indexing.
  • Detects references to URLs like ‘file:’, ‘feed=’, ‘mailto:’, ‘javascript:’ and others.
  • Supports CTRL-C to stop the current crawler stage and continue working.
  • Identifies file extensions (zip, swf, sql, rar, etc.)
  • Download files to a directory:
    • Download every important file (images, documents, compressed files).
    • Or download specified file types.
    • Or download a predefined set of files (like ‘document’ files: .doc, .xls, .pdf, .odt, .gnumeric, etc.).
  • Limits the maximum number of links to crawl (5000 URLs by default).
  • Follows redirections using HTML and JavaScript location tags and HTTP response codes.
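
The crawler's core is a breadth-first loop over links pulled out of the HTML with regular expressions, as the list above describes. A stripped-down sketch (illustration only; the real crawler.py also handles redirects, file downloads and directory-listing detection):

# Breadth-first crawl limited to max_links URLs; links come from 'href'/'src' attributes.
import re
import urllib.request
from collections import deque
from urllib.parse import urljoin

LINK_RE = re.compile(r"""(?:href|src)\s*=\s*["']([^"']+)["']""", re.IGNORECASE)

def crawl(start_url, max_links=5000):
    seen, queue = set(), deque([start_url])
    while queue and len(seen) < max_links:
        url = queue.popleft()
        if url in seen:
            continue
        seen.add(url)
        try:
            html = urllib.request.urlopen(url, timeout=10).read().decode("utf-8", "ignore")
        except Exception:
            continue                          # unreachable page, non-HTTP scheme, etc.
        for link in LINK_RE.findall(html):
            queue.append(urljoin(url, link))  # resolve relative links against the page
    return seen

# Example: crawl("http://www.example.com", max_links=100)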

Crawler Usage:

Usage: crawler.py <options>

Options:
  -u, --url                            URL to start crawling.

  -m, --max-amount-to-crawl            Maximum depth to crawl, using a breadth-first algorithm.

  -w, --write-to-file                  Save summary of crawling to a text file.
                                       Output directory is created automatically.

  -s, --subdomains                     Also scan subdomains matching the URL's domain.

  -r, --follow-redirect                Do not follow redirects. By default, redirection 
                                       is followed at the main URL.

  -f, --fetch-files                    Download every file detected into a 'Files' 
                                       directory. Overwrites existing content.

  -F, --file-extension                 Download files specified by comma separated 
                                       extensions. This option also activates 
                                       'fetch-files' option. 'Ex.: -F pdf,xls,doc'

  -d, --docs-files                     Download document files: xls, pdf, doc, docx, 
                                       txt, odt, gnumeric, csv, etc. This option also 
                                       activates the 'fetch-files' option.

  -E, --exclude-extensions             Do not download files matching these extensions. 
                                       Requires '-f', '-F' or '-d'.

  -h, --help                           Show this help message and exit.

  -V, --version                        Output version information and exit.

  -v, --verbose                        Be verbose.

  -D, --debug                          Debug.

Installation

Untar the .tar.gz file and copy the Python files to the /usr/bin/ directory. domain_analyzer.py needs to be run as root. The crawler can be run as a non-privileged user. If you want all the features (web crawler, PDF report and colors), which is nice, also copy these files to /usr/bin or /usr/local/bin:
  • ansistrm.py
  • crawler.py
  • pyText2pdf.py
If you have any issues with the GeoIP database, download it from its original source and install it where your system expects it, usually at /opt/local/share/GeoIP/GeoIP.dat.
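
A typical installation looks like this (the tarball and directory names are assumptions; adjust them to the release you downloaded):

tar xzf domain_analyzer.tar.gz
cd domain_analyzer
sudo cp domain_analyzer.py crawler.py ansistrm.py pyText2pdf.py /usr/local/bin/
sudo chmod +x /usr/local/bin/domain_analyzer.py /usr/local/bin/crawler.py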

Examples:

  • Find 10 random domains under .gov and analyze them fully (including web crawling). If a zone transfer is found, retrieve more domains from Robtex using it.
domain_analyzer.py -d .gov -k 10 -b
  • (Very quick and dirty) Find everything related to the .edu.cn domain and store everything in directories. Do not search for active hosts, do not scan them with nmap, do not reverse-DNS the netblock, and do not search for emails.
domain_analyzer.py -d edu.cn -b -o -g -a -n
  • Analyze the 386.edu.ru domain fully.
domain_analyzer.py -d 386.edu.ru -b -o
  • (Pen tester mode) Analyze a domain fully. Do not find other domains. Print everything to a PDF file. Store everything on disk. When finished, open Zenmap and show the topology of every host found at the same time!
domain_analyzer.py -d amigos.net -o -e
  • (Quick, with web crawl only) Ignore any hostname with ‘google’ in it.
domain_analyzer.py -d mil.cn -b -o -g -a -n -v google -x '-O --reason --webxml
--traceroute -sS -sV -sC -PN -n -v -p 80,4443'
  • (Everything) Crawl up to 100 URLs of this site including subdomains. Store output into a file and download every INTERESTING file found to disk.
crawler.py -u www.386.edu.ru -w -s -m 100 -f
  • (Quick and dirty) Crawl the site very quickly. Do not download files. Store the output to a file.
crawler.py -u www.386.edu.ru -w -m 20
  • (If you want to analyze metadata later with lafoca) Verbose mode prints which extensions are being downloaded. Download only the set of files corresponding to documents (.doc, .docx, .ppt, .xls, .odt, etc.).
crawler.py -u ieeeexplore.ieee.org/otherfiles/ -d -v
