Description: In this video we will look at how to store data mined from the web in a database using Python. The prerequisite is that you have already seen and understood the "Crawling the web for Fun and Profit" video. If you haven't, please go through that video first.
The ability to mine terabytes of data with an automated crawler is really cool! One could write automated port scanners, vulnerability scanners, site crawlers, etc. and have them sweep through large IP spaces to derive interesting insights for fun or for profit ;-). However, it is important to store all that data in a systematic, usable and easily analyzable format, and the best way to do that is to put it in a database. This video is aimed at helping newbies and intermediate hackers quickly get started with databases. The task we will accomplish is to insert the "link information" mined by the crawler from the previous video into the database.
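To give a feel for that core step before you watch, here is a minimal sketch (not the exact code from the video) of inserting one crawled link with PyGreSQL's classic pg module. The database name 'simple', the user, and the links(url, title) table are assumptions on my part; the real layout ships in the simple.sql schema mentioned below.

    import pg

    # Connect to the local PostgreSQL server; the database name and
    # user are assumptions -- adjust them to match your own setup.
    con = pg.connect(dbname='simple', host='localhost', user='postgres')

    def store_link(url, title):
        # Crawled pages are untrusted input, so escape the strings
        # before splicing them into the SQL statement.
        con.query("INSERT INTO links (url, title) VALUES ('%s', '%s')"
                  % (pg.escape_string(url), pg.escape_string(title)))

    # One row per link mined by the crawler from the previous video.
    store_link('http://example.com/', 'Example Domain')
    con.close()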
We will use the default PostgreSQL install in BackTrack 3.0 in this video. Please download the Python code, the PyGreSQL library and the simple.sql DB schema before you proceed to watch the video.
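The insert sketch above assumed a links(url, title) table; simple.sql presumably defines something along those lines. Purely for illustration, here is one plausible definition, created from Python so you can see that pg.query runs arbitrary SQL, not just SELECTs:

    import pg

    # Hypothetical table layout -- the authoritative schema is the
    # one shipped in simple.sql; every name here is an assumption.
    SCHEMA = """
    CREATE TABLE links (
        id    serial PRIMARY KEY,
        url   text NOT NULL,
        title text
    );
    """

    con = pg.connect(dbname='simple', host='localhost', user='postgres')
    con.query(SCHEMA)  # executes the CREATE TABLE statement
    con.close()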
Tags: programming
Disclaimer: We are an infosec video aggregator and this video is linked from an external website. The original author may be different from the user re-posting/linking it here. Please do not assume the authors are the same without verifying.
Followed along from your previous post. Another great tutorial. Very easy to follow :)