
Is there a way to automatically grab all the elements on the page using Selenium?

When creating tests for .Net applications, I can use the White library to find all elements of a given type. I can then write these elements to an Xml file, so they can be referenced and used for GUI tests. This is much faster than manually recording each individual element's info, so I would like to do the same for web applications using Selenium. I haven't been able to find any info on this yet.

I would like to be able to search for every element of a given type and save its information (location/XPath, value, and label) so I can write it to a text file later.

Here is the ideal workflow I'm trying to get to:

navigate_to_page(http://loginscreen.com)
log_in
open_account
button_elements = grab_elements_of_type(button) # this will return an array of XPaths and Names/IDs/whatever - some way of identifying each grabbed element

That code can run once, and I can then re-run it should any elements get changed, added, or removed.

I can then have another custom function iterate through the array, saving the info in a format I can use later easily; in this case, a Ruby class containing a list of constants:

LOGIN_BUTTON = "//div[1]/loginbutton"
EXIT_BUTTON = "//div[2]/exitbutton"
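That custom function could be a small Ruby sketch like the one below (hypothetical: the `[name, xpath]` array shape and the `Elements` module name are my assumptions, not anything Selenium provides):

```ruby
# Sketch: turn grabbed element info into a Ruby file of constants.
# Each entry is [name, xpath]; both the input shape and the module
# name are hypothetical.
def write_constants_file(elements)
  lines = elements.map do |name, xpath|
    "  #{name.upcase.gsub(/\W/, '_')} = #{xpath.inspect}"
  end
  "module Elements\n#{lines.join("\n")}\nend\n"
end

puts write_constants_file([["login button", "//div[1]/loginbutton"],
                           ["exit button",  "//div[2]/exitbutton"]])
```

Running it once produces a `module Elements` with `LOGIN_BUTTON` and `EXIT_BUTTON` constants; re-running after the page changes regenerates the file.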

I can then write tests that look like this:

log_in # this will use the info that was automatically grabbed beforehand
current_screen.should == "Profile page"

Right now, every time I want to interact with a new element, I have to manually go to the page, select it, open it with XPather, and copy the XPath to whatever file I want my code to look at. This takes up a lot of time that could otherwise be spent writing code.


Ultimately what you're looking for is extracting the information you've recorded in your test into a reusable component.

  1. Record your tests in Firefox using the Selenium IDE plugin.
  2. Export your recorded test into a .cs file (assuming .NET, since you mentioned White; Ruby export options are also available).
  3. Extract the XPath / CSS IDs, encapsulate them into reusable classes, and use the PageObject pattern to represent each page.

Using the above technique, you only need to update your PageObject with updated locators instead of re-recording your tests.
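A minimal PageObject in Ruby might look like this (a sketch only; the locator value is taken from your question, and the `driver` interface is assumed to follow selenium-webdriver's `find_element(:xpath, ...)` shape):

```ruby
# Sketch of the PageObject pattern: locators live in one place,
# so a changed XPath means editing this class, not every test.
class LoginPage
  LOGIN_BUTTON = "//div[1]/loginbutton"  # locator from the recording step

  def initialize(driver)
    @driver = driver
  end

  def log_in
    @driver.find_element(:xpath, LOGIN_BUTTON).click
  end
end
```

A test then says `LoginPage.new(driver).log_in` and never mentions an XPath directly.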


Update:

You want to automate the recording portion? That sounds awkward. Perhaps what you really want is to extract all the hyperlinks from a particular page and perform the same action on each of them?

You should use Selenium's object model to script against the DOM.

[Test]
public void GetAllHyperLinks()
{
    IWebDriver driver = new FirefoxDriver();
    driver.Navigate().GoToUrl("http://youwebsite");

    ReadOnlyCollection<IWebElement> query
             = driver.FindElements(By.XPath("//yourxpath"));

    // iterate through the collection and access whatever you want:
    // save it to a file, update a database, etc...
}

Update 2:

Ok, so I understand your concerns now. You're looking to get the locators out of a web page for future reference. The challenge is in constructing the locator!

There are going to be some challenges in constructing your locators, especially when a page contains more than one instance of an element, but you should be able to get far enough using the locator strategies Selenium supports.

For example, you could find all hyperlinks using the XPath "//a", and then use Selenium to construct a more specific locator for each one. You may have to customize the locator to suit your needs; an example locator might use the CSS class and text value of the hyperlink:

//a[contains(@class,'adminLink')][.='Edit']

// Selenium 2.0 (WebDriver) syntax
[Test]
public void GetAllHyperLinks()
{
    IWebDriver driver = new FirefoxDriver();
    driver.Navigate().GoToUrl("http://youwebsite");

    ReadOnlyCollection<IWebElement> query
             = driver.FindElements(By.XPath("//a"));

    foreach (IWebElement hyperLink in query)
    {
        string locatorFormat = "//a[contains(@class,'{0}')][.='{1}']";

        string locator = String.Format(locatorFormat,
                                       hyperLink.GetAttribute("class"),
                                       hyperLink.Text);

        // spit out the locator for reference.
    }
}

You're still going to need to associate the Locator to your code file, but this should at least get you started by extracting the locators for future use.
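The format-string step is plain string interpolation, so the Ruby equivalent is short (a sketch mirroring the `locatorFormat` string in the C# example; it assumes you already have each link's class attribute and text in hand):

```ruby
# Build an XPath locator from a link's class attribute and its text,
# mirroring the locatorFormat string in the C# example above.
def link_locator(css_class, text)
  "//a[contains(@class,'#{css_class}')][.='#{text}']"
end

puts link_locator("adminLink", "Edit")
# => //a[contains(@class,'adminLink')][.='Edit']
```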

Here's an example of crawling links using Selenium 1.0 http://devio.wordpress.com/2008/10/24/crawling-all-links-with-selenium-and-nunit/


Selenium runs on the browser side; even if you can grab all the elements, there is no built-in way to save them to a file. As far as I know, Selenium was not designed for that kind of work.


Do you need to get the entire source of the page? If so, try the GetHtmlSource method: http://release.seleniumhq.org/selenium-remote-control/0.9.0/doc/dotnet/html/Selenium.DefaultSelenium.GetHtmlSource.html
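Once you have the full page source as a string, you can mine it for elements yourself. Below is a rough regex-based sketch (my own illustration, not part of Selenium); real pages deserve a proper HTML parser, and the pattern only handles simple, well-formed tags with quoted id attributes:

```ruby
# Rough sketch: pull the id of every element of a given tag type out
# of raw HTML and emit candidate XPath locators. A real HTML parser
# is safer than a regex for anything beyond trivial markup.
def id_locators(html, tag)
  html.scan(/<#{tag}\b[^>]*\bid=["']([^"']+)["']/i).flatten
      .map { |id| "//#{tag}[@id='#{id}']" }
end

html = %(<input id="user"/><input id="pass"/><a id="go">Go</a>)
puts id_locators(html, "input")
# => //input[@id='user']
#    //input[@id='pass']
```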

