Mass Site Visitor
By: headshote. Published on Tuesday, October 11, 2016, 13:50:21 in Mobile.
Mass Site Visitor (MSV) is designed to generate large amounts of traffic to websites of your choice through a list of proxies, in a multi-threaded way. You can set referrers and user agents, which, along with proxies, will be randomly selected to create each browser session. Referrers, user agents and proxies should be stored in text files and can be appended to an already loaded list in the program by pressing the 'Load the list' button. Remember to add the word socks5 after the IP of a public socks4/5 proxy, separated by whitespace.

For each browser session you can choose to save an image snapshot of every visit (to see how the website is rendered from different IPs) and customize the browser's behavior while it is on the page; you can also set the bot to click some links (check out the Settings – Options window). You can filter out non-responsive proxies from the ones that work and save the working ones to a file (Settings – Export Healthy Proxies). If you don't have a proxy list of your own, and the one supplied with MSV isn't satisfactory (with time those proxies might cease to work), you can scrape public proxies from the web (Settings – Import Proxies from the Web). When scraping proxies from the web, you don't have to worry about adding the proxy type for a socks proxy; the program will save all the info in the right format.

The log of events that happened in the context of the program is available for the user to study. It contains messages about the proxy-scraping process, as well as about the process of sending traffic through proxies to the target website.

Newest and hottest video preview: https://www.youtube.com/watch?v=Vcx4oEvt0pI

Latest features include:
1. JavaScript referrer spoofing (for Google Analytics traffic sources).
2. Chromedriver (non-headless browser to influence video/audio view counts on websites that track playback from visitors).
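As an illustration of the proxy-list format described above (host:port, with the word socks5 appended after a public socks proxy), here is a minimal Python sketch of how such a file could be parsed. This is not MSV's actual code; the function name and the default-to-HTTP assumption are mine.

```python
# Hypothetical parser for the proxy text-file format described above:
# "host:port", optionally followed by "socks5" (or "socks4") separated
# by whitespace. Not taken from MSV itself.

def parse_proxy_line(line):
    """Return (host, port, scheme) for one line of the proxy text file."""
    parts = line.split()
    host_port = parts[0]
    # Assume a plain HTTP proxy unless a socks type follows the IP.
    scheme = parts[1].lower() if len(parts) > 1 else "http"
    host, _, port = host_port.rpartition(":")
    return host, int(port), scheme

proxies = [parse_proxy_line(l) for l in [
    "203.0.113.7:1080 socks5",   # public socks proxy, type appended
    "198.51.100.12:8080",        # plain proxy, no type needed
]]
```

Lists scraped through Settings – Import Proxies from the Web already come in this shape, so a loader like this only matters for hand-made lists.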
Pro-tip: don't use JS referrer spoofing with this feature; media-sharing sites don't care much about traffic sources, so spoofing is only useful for influencing Google Analytics.
3. More proxy filters for the scraper.
4. Test all proxies (uses the connection timeout and number of attempts from the Options window).
5. Private proxy support; consult the manual on how to set up your proxy list to supply authentication info for a private proxy provider.
6. Editable browser dimensions. Check the Options menu and use the same format as the default resolutions provided with the program.
7. Firefox geckodriver, for cases when Chrome is not enough. Nightly is the preferred version, because it was the only one that worked properly without crashes on different machines.
8. Experimental: Alexa toolbar/extension. Warning: works stably only with Firefox Nightly.
9. Multiple website destinations for a session. Put one website per line in the text field; one of them will be randomly selected for each visit.
10. Search for the element to click within an iframe. If a link is inside an embedded third-party web page, consult the manual on how to use this. If the link isn't in an iframe, just leave the last two text fields of the script as they are (one with NO, the other empty).

Some tips about running and installing:
1. The program communicates with the headless browser through a socket interface, so to allow inter-process communication you will most likely have to give both PhantomJS and MSV the green light in Windows Firewall (usually the firewall will simply ask whether you want to allow the program to communicate over the network).
2. You might get it working on Windows XP and 10, but MSV was never tested on those.
3. The folder with all the settings and outputs will be something like C:/Users/Your Username/Site Mass Visitor. If you never changed the default directory for saving scraped proxies, they are in that folder somewhere.
4.
Don't forget to reduce the thread count if you are using a low-end machine. Having several dozen threads running browser sessions that are constantly created and destroyed, each trying to load a web page, is a performance-heavy task.
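To make the threading and random-selection behavior concrete, here is a minimal Python sketch of the overall pattern: a fixed pool of worker threads, each session drawing a random referrer, user agent, proxy, and destination. The sample data and the stubbed run_session function are hypothetical; a real visit would drive a browser instead of returning a dict.

```python
import random
from concurrent.futures import ThreadPoolExecutor

# Sample data standing in for the loaded text-file lists.
referrers = ["https://referrer-one.example/", "https://referrer-two.example/"]
user_agents = ["UA-1", "UA-2"]
proxies = ["203.0.113.7:1080", "198.51.100.12:8080"]
destinations = ["https://target-one.example/", "https://target-two.example/"]

def run_session(_):
    # Each session gets a randomly selected referrer, user agent, proxy
    # and destination, mirroring the behavior described above.
    session = {
        "referrer": random.choice(referrers),
        "user_agent": random.choice(user_agents),
        "proxy": random.choice(proxies),
        "url": random.choice(destinations),
    }
    # A real visit would launch a browser with these parameters here;
    # this stub just returns the chosen combination.
    return session

# Keep max_workers modest on a low-end machine, per the tip above.
with ThreadPoolExecutor(max_workers=4) as pool:
    results = list(pool.map(run_session, range(8)))
```

The thread count is the single biggest performance knob: each worker in the real program owns a full browser process, so 4 workers here would mean 4 concurrent browsers.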