
When I run the code below, it displays only the web page for the last URL listed in "domainslist.txt". It does not display the web pages for the earlier URLs.

For example, if "domainslist.txt" contains:

http://example[1].com
http://example[2].com
http://example[3].com

Then the code only displays the web page from example[3].com.

Why does it not display all three?

function url_get_contents($Url) {
    if (!function_exists('curl_init')) {
        die('CURL is not installed!');
    }
    $ch = curl_init();
    curl_setopt($ch, CURLOPT_URL, $Url);
    curl_setopt($ch, CURLOPT_RETURNTRANSFER, true);
    $output = curl_exec($ch);
    curl_close($ch);
    return $output;
}

$urls = file("domainslist.txt", FILE_SKIP_EMPTY_LINES);

foreach ($urls as $url) {
    echo url_get_contents($url);
}

NB: If I create the array of URLs manually, like this:

$urls = array();
$urls[0] = "http://example[1].com";
$urls[1] = "http://example[2].com";
$urls[2] = "http://example[3].com";

then it works fine, displaying all 3 pages.

EDIT:

When I use var_dump($urls); there is a small difference between the results from the two methods of building the array. The first two URLs in the array created with file() are reported as two characters longer than expected, but the final URL (the one that displays) has the right length. When the array is created manually, there are no extra characters.
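A minimal debugging sketch (not part of the code above) that makes those hidden characters visible by escaping control characters before dumping each line:

// Read the file the same way as above, then dump each line with control
// characters (e.g. "\r", "\n") escaped so they show up in the output.
$urls = file("domainslist.txt", FILE_SKIP_EMPTY_LINES);

foreach ($urls as $url) {
    var_dump(addcslashes($url, "\0..\37"));
}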

3 Answers


Add the FILE_IGNORE_NEW_LINES flag so the trailing newline is no longer kept on each line:

file('domainslist.txt', FILE_IGNORE_NEW_LINES | FILE_SKIP_EMPTY_LINES);

See the PHP manual for file().
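A minimal sketch of the corrected loop, assuming the url_get_contents() helper from the question (trim() is added only as an extra guard against stray carriage returns):

// FILE_IGNORE_NEW_LINES omits the newline at the end of each line, so curl
// receives "http://example.com" rather than "http://example.com\n".
$urls = file('domainslist.txt', FILE_IGNORE_NEW_LINES | FILE_SKIP_EMPTY_LINES);

foreach ($urls as $url) {
    echo url_get_contents(trim($url)); // trim() guards against a leftover "\r"
}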


Assuming that your example of "domainslist.txt" is how the file itself looks, add a slash (/) to the end of your URLs and your code will work.

Adding the flag FILE_IGNORE_NEW_LINES when you open the file should also do it.


Your code seems OK, so my guess is that there is something strange with domainslist.txt.

The first things to check are how often foreach ($urls as $url) is executed and what the content of $url is. It should run 3 times with, obviously, 3 different URLs. Also check for extra data appended to $url, in case the file is in a strange encoding format.

In short, try this for debugging and let us know the output.

function url_get_contents($Url) {
    if (!function_exists('curl_init')) {
        die('CURL is not installed!');
    }
    $ch = curl_init();
    curl_setopt($ch, CURLOPT_URL, trim($Url)); // added trim() to remove unintended characters from domainslist.txt
    curl_setopt($ch, CURLOPT_RETURNTRANSFER, true);
    $output = curl_exec($ch);
    curl_close($ch);
    return $output;
}

$urls = file("domainslist.txt", FILE_SKIP_EMPTY_LINES);

var_dump($urls);

foreach ($urls as $url) {
    var_dump($url);
    var_dump(url_get_contents($url));
}

6 Comments

Thanks for this. When I run the suggested code, it shows that all the URLs from domainslist.txt are being stored in the $urls variable and are being iterated over by the foreach loop. However, the output is the same: the web page from the last URL only.
So print_r($urls); for the array built from domainslist.txt is identical to print_r($urls); when you create the array manually, like in your post with $urls = array(); etc.?
Don't know if this is relevant, but I have tried doing this with PHP's file_get_contents() function as well, with the same result.
Indeed, try var_dump instead of print_r and also a var_dump on $output in your function. Perhaps update your question with the full results.
Yes - the output of print_r($urls); is identical to the output when I manually create the array.
