Struggling with my code right now. I'll try my best to describe the problem.
My program takes a URL as a command-line argument, fetches the HTML for the page at that URL, searches the HTML for links to other pages, and then repeats these steps for each link it finds. I hope that is clear.
It should print out any links that cause errors.
Some more needed info:
It may visit at most 100 pages, and the code should run on Linux machines. If a page has an error, a None value is returned. I'm using Python 3.
e.g. s = readwebpage(url) gets the HTML for the link (url) passed as its argument; if the link has an error, s is None.
The screenshots I attached show the output when I run this command in a terminal:

python3 VisitURL.py http://ift.tt/1HnD0l1
The HTML for that page contains links ending in p2.html, p3.html, p4.html, and p5.html. My code finds all of these, but it never visits them individually to search for more links. If it did, it would eventually find a link ending in p10.html and report that that link has an error. Obviously it doesn't do that at the moment, and it's giving me a hard time.
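In case it helps anyone answering: here is a minimal sketch of the kind of breadth-first crawl I think I need. It assumes a `readwebpage(url)` function like the one described above (returns the page's HTML, or None on error); since I can't post my real one, the demo below substitutes a fake in-memory site. The link-extraction regex is just a placeholder, not my actual parsing code.

```python
import re
from collections import deque

MAX_VISITS = 100  # the assignment's visit cap

def crawl(start_url, readwebpage, max_visits=MAX_VISITS):
    """Breadth-first crawl from start_url.

    readwebpage(url) is assumed to return the page's HTML as a string,
    or None if the URL has an error. Returns the URLs that caused
    errors, in the order they were visited.
    """
    visited = set()
    errors = []
    queue = deque([start_url])
    while queue and len(visited) < max_visits:
        url = queue.popleft()
        if url in visited:
            continue
        visited.add(url)
        html = readwebpage(url)
        if html is None:          # bad page: record it, don't recurse into it
            errors.append(url)
            continue
        # placeholder link extraction; real code might use html.parser instead
        for link in re.findall(r'href="([^"]+)"', html):
            if link not in visited:
                queue.append(link)
    return errors

# --- demo with a fake site (hypothetical URLs, not the real assignment site) ---
pages = {
    "http://example/p1.html": '<a href="http://example/p2.html"></a>'
                              '<a href="http://example/p3.html"></a>',
    "http://example/p2.html": '<a href="http://example/p10.html"></a>',
    "http://example/p3.html": "",
}

def fake_read(url):
    return pages.get(url)  # unknown URL -> None, i.e. an "error" page

errs = crawl("http://example/p1.html", fake_read)
print(errs)  # the only broken link is p10.html
```

The key point (and, I suspect, my bug) is that every link found on a page has to go back into the queue so it gets visited in a later iteration, instead of only being printed.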
Please help :)