I am trying to automate a task and web scrape from a dynamic website.
This is the site: https://www.kaijinet.com/jpExpress/Default.aspx?f=company&cf=summary&cc=7203
The page builds its content with JavaScript after loading, and I've tried, among many other things:
from selenium import webdriver
from selenium.webdriver.common.by import By
driver = webdriver.Chrome()
driver.implicitly_wait(20)
link = "https://www.kaijinet.com/jpExpress/Default.aspx?f=company&cf=summary&cc=7203"
driver.get(link)
selector = "#numberOfIssuedShares > div:nth-child(2) > div.stockDetail > table > tbody > tr > td:nth-child(2)"
shares = driver.find_element(By.CSS_SELECTOR, selector)  # Selenium 4 syntax; find_element_by_css_selector is deprecated
as well as adding a plain delay:
import time
time.sleep(25)
Neither worked. The error is:
selenium.common.exceptions.NoSuchElementException: Message: Unable to locate element: #numberOfIssuedShares > div:nth-child(2) > div.stockDetail > table > tbody > tr > td:nth-child(2)
If I print the page source, using
print(driver.page_source)
I get completely different source code from what the browser renders. Is there a way to get Selenium to execute the page's scripts and then work with the dynamically generated HTML? I googled, and none of the solutions that came up fixed this problem. Thanks!
Should I be calling .implicitly_wait() after getting the link instead?