Using regular expressions to parse HTML: why not?

Tags: Regex, HTML Parsing

Regex Problem Overview


It seems like every question on Stack Overflow where the asker is using regex to grab some information from HTML will inevitably have an "answer" that says not to use regex to parse HTML.

Why not? I'm aware that there are quote-unquote "real" HTML parsers out there like Beautiful Soup, and I'm sure they're powerful and useful, but if you're just doing something simple, quick, or dirty, then why bother using something so complicated when a few regex statements will work just fine?

Moreover, is there just something fundamental that I don't understand about regex that makes them a bad choice for parsing in general?

Regex Solutions


Solution 1 - Regex

Parsing HTML in its entirety is not possible with regular expressions, since it depends on matching the opening and the closing tag, which is not possible with regexps.

Regular expressions can only match regular languages, but HTML is a context-free language and not a regular language (as @StefanPochmann pointed out, regular languages are also context-free, so "context-free" doesn't necessarily mean "not regular"). The only thing you can do with regexps on HTML is apply heuristics, but that will not work in every case. It should be possible to present an HTML file that will be matched wrongly by any given regular expression.
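
To make the nesting problem concrete, here is a minimal Python sketch (the sample markup is made up for illustration): a non-greedy regex pairs the outer tag with the first closing tag it finds, not with its real partner.

import re

html = "<div>outer <div>inner</div> tail</div>"

# Lazy matching stops at the FIRST </div>, so the outer element's
# real closing tag is never paired with its opening tag.
m = re.search(r"<div>(.*?)</div>", html)
print(m.group(1))  # -> 'outer <div>inner'  (the nesting is lost)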

Solution 2 - Regex

For quick'n'dirty jobs, a regexp will do fine. But the fundamental thing to know is that it is impossible to construct a regexp that will correctly parse HTML.

The reason is that regexps can't handle arbitrarily nested expressions. See Can regular expressions be used to match nested patterns?

Solution 3 - Regex

(From http://htmlparsing.com/regexes)

Say you've got a file of HTML where you're trying to extract URLs from <img> tags.

<img src="http://example.com/whatever.jpg">

So you write a regex like this in Perl:

if ( $html =~ /<img src="(.+)"/ ) {
    $url = $1;
}

In this case, $url will indeed contain http://example.com/whatever.jpg. But what happens when you start getting HTML like this:

<img src='http://example.com/whatever.jpg'>

or

<img src=http://example.com/whatever.jpg>

or

<img border=0 src="http://example.com/whatever.jpg">

or

<img
    src="http://example.com/whatever.jpg">

or you start getting false positives from

<!-- // commented out
<img src="http://example.com/outdated.png">
-->

It looks so simple, and it might be simple for a single, unchanging file, but for anything that you're going to be doing on arbitrary HTML data, regexes are just a recipe for future heartache.
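
For comparison, here is a short sketch using Python's standard-library html.parser; it copes with every variant above, including skipping the commented-out tag (the markup strings are the examples from this answer):

from html.parser import HTMLParser

class ImgSrcCollector(HTMLParser):
    """Collect the src attribute of every real <img> tag."""
    def __init__(self):
        super().__init__()
        self.urls = []

    def handle_starttag(self, tag, attrs):
        # Comments never reach handle_starttag, so the commented-out
        # <img> below is ignored automatically.
        if tag == "img":
            for name, value in attrs:
                if name == "src" and value:
                    self.urls.append(value)

html = """
<img src="http://example.com/whatever.jpg">
<img src='http://example.com/whatever.jpg'>
<img src=http://example.com/whatever.jpg>
<img border=0 src="http://example.com/whatever.jpg">
<img
    src="http://example.com/whatever.jpg">
<!-- // commented out
<img src="http://example.com/outdated.png">
-->
"""

collector = ImgSrcCollector()
collector.feed(html)
print(collector.urls)  # five copies of the real URL; outdated.png never appears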

Solution 4 - Regex

Two quick reasons:

  • writing a regex that can stand up to malicious input is hard; way harder than using a prebuilt tool
  • writing a regex that can work with the ridiculous markup that you will inevitably be stuck with is hard; way harder than using a prebuilt tool

Regarding the suitability of regexes for parsing in general: they aren't suitable. Have you ever seen the sorts of regexes you would need to parse most languages?

Solution 5 - Regex

As far as parsing goes, regular expressions can be useful in the "lexical analysis" (lexer) stage, where the input is broken down into tokens. It's less useful in the actual "build a parse tree" stage.

For an HTML parser, I'd expect it to only accept well-formed HTML and that requires capabilities outside what a regular expression can do (they cannot "count" and make sure that a given number of opening elements are balanced by the same number of closing elements).
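
A rough sketch of that split (illustrative input, not from the original answer): a regex is enough to chop HTML into tag and text tokens, the lexer stage, but pairing open and close tags into a tree still needs a stack, which is exactly the "counting" a regular expression cannot do.

import re

# Lexer stage: a regex happily splits the input into tag and text tokens.
TOKEN = re.compile(r"<[^>]+>|[^<]+")

tokens = TOKEN.findall("<ul><li>one</li><li>two</li></ul>")
for tok in tokens:
    print("TAG " if tok.startswith("<") else "TEXT", tok)

# Parser stage: pairing tags requires a stack, i.e. an actual parser.
stack = []
for tok in tokens:
    if tok.startswith("</"):
        opener = stack.pop()
        print(f"{opener} closed by {tok}")
    elif tok.startswith("<"):
        stack.append(tok)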

Solution 6 - Regex

Because there are many ways to "screw up" HTML that browsers will still treat in a rather liberal way. It would take quite some effort to reproduce that liberal behaviour and cover all the cases with regular expressions, so your regex will inevitably fail on some special cases, and that could introduce serious security gaps into your system.

Solution 7 - Regex

The problem is that most users who ask a question involving HTML and regex do so because they can't come up with a regex of their own that works. Then you have to ask whether everything would be easier with a DOM or SAX parser, or something similar. They are optimized and built for the purpose of working with XML-like document structures.

Sure, there are problems that can be solved easily with regular expressions. But the emphasis lies on easily.

If you just want to find all URLs that look like http://.../, you're fine with regexps. But if you want to find all URLs that are inside an <a> element with the class 'mylink', you are probably better off using an appropriate parser.
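
A sketch of the difference, assuming Beautiful Soup is installed (the markup and class name are made up for illustration): the "easy" case is one regex, while the class-constrained case is one line with a parser but surprisingly fiddly with a regex (attribute order, quoting, extra classes, whitespace).

import re
from bs4 import BeautifulSoup  # assumes: pip install beautifulsoup4

text = '<a class="mylink" href="http://example.com/a">A</a> <a href="http://example.com/b">B</a>'

# Easy case: anything that merely looks like a URL.
print(re.findall(r"https?://[^\s\"'<>]+", text))
# -> ['http://example.com/a', 'http://example.com/b']

# Harder case: only URLs inside <a> elements with class "mylink".
soup = BeautifulSoup(text, "html.parser")
print([a["href"] for a in soup.find_all("a", class_="mylink")])
# -> ['http://example.com/a']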

Solution 8 - Regex

Regular expressions were not designed to handle a nested tag structure, and it is at best complicated (at worst, impossible) to handle all the possible edge cases you get with real HTML.

Solution 9 - Regex

I believe that the answer lies in computation theory. For a language to be parsed using regex, it must by definition be "regular". HTML is not a regular language, as it does not meet a number of the criteria for a regular language (largely because of the many levels of nesting inherent in HTML). If you are interested in the theory of computation, I would recommend this book.

Solution 10 - Regex

HTML/XML is divided into markup and content. A regex is only useful for a lexical parse of the tags; I suppose you could then deduce the content. It would be a good fit for a SAX-style parser: tags and content could be delivered to a user-defined function that keeps track of the nesting/closure of elements.

As far as just parsing the tags goes, it can be done with a regex and used to strip tags from a document.
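
A crude sketch of that tag-stripping use (a naive version only: it will misfire on '>' inside quoted attribute values, comments, and script blocks, which is what the fuller patterns below work around):

import re

html = '<p>Hello <b>world</b>!</p>'
# Naive tag stripper: delete anything that looks like <...>.
print(re.sub(r"<[^>]+>", "", html))  # -> 'Hello world!'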

Over years of testing, I've found the secret to the way browsers parse tags, both well and ill formed.

The normal elements are parsed with a general form (shown a little further below); the core of those tags uses this regex:

 (?:
      " [\S\s]*? " 
   |  ' [\S\s]*? ' 
   |  [^>]? 
 )+

You'll notice this [^>]? as one of the alternations. This will match unbalanced quotes from ill-formed tags.

It is also the single biggest root of evil in this regex. The way it's used will trigger a bump-along to satisfy its greedy, must-match quantified container.

If used passively, there is never a problem. But if you force something to match by interspersing it with a wanted attribute/value pair, and don't provide adequate protection from backtracking, it's an out-of-control nightmare.

This is the general form for just plain old tags. Notice the [\w:] representing the tag name? In reality, the legal characters representing the tag name are an incredible list of Unicode characters.

 <     
 (?:
      [\w:]+ 
      \s+ 
      (?:
           " [\S\s]*? " 
        |  ' [\S\s]*? ' 
        |  [^>]? 
      )+
      \s* /?
 )
 >

Moving on, we also see that you just can't search for a specific tag without parsing ALL tags. I mean you could, but it would have to use a combination of verbs like (*SKIP)(*FAIL), and even then all tags still have to be parsed.

The reason is that tag syntax may be hidden inside other tags, etc.

So, to passively parse all tags, a regex is needed like the one below. This particular one matches invisible content as well.

As HTML, XML, or anything else develops new constructs, just add them as one of the alternations.


Web page note: I've never seen a web page (or XHTML/XML) that this had trouble with. If you find one, let me know.

Performance note: It's quick. This is the fastest tag parser I've seen (there may be faster, who knows). I have several specific versions. It is also excellent as a scraper (if you're the hands-on type).


Complete raw regex

<(?:(?:(?:(script|style|object|embed|applet|noframes|noscript|noembed)(?:\s+(?>"[\S\s]*?"|'[\S\s]*?'|(?:(?!/>)[^>])?)+)?\s*>)[\S\s]*?</\1\s*(?=>))|(?:/?[\w:]+\s*/?)|(?:[\w:]+\s+(?:"[\S\s]*?"|'[\S\s]*?'|[^>]?)+\s*/?)|\?[\S\s]*?\?|(?:!(?:(?:DOCTYPE[\S\s]*?)|(?:\[CDATA\[[\S\s]*?\]\])|(?:--[\S\s]*?--)|(?:ATTLIST[\S\s]*?)|(?:ENTITY[\S\s]*?)|(?:ELEMENT[\S\s]*?))))>

Formatted look

 <
 (?:
      (?:
           (?:
                # Invisible content; end tag req'd
                (                             # (1 start)
                     script
                  |  style
                  |  object
                  |  embed
                  |  applet
                  |  noframes
                  |  noscript
                  |  noembed 
                )                             # (1 end)
                (?:
                     \s+ 
                     (?>
                          " [\S\s]*? "
                       |  ' [\S\s]*? '
                       |  (?:
                               (?! /> )
                               [^>] 
                          )?
                     )+
                )?
                \s* >
           )
           
           [\S\s]*? </ \1 \s* 
           (?= > )
      )
      
   |  (?: /? [\w:]+ \s* /? )
   |  (?:
           [\w:]+ 
           \s+ 
           (?:
                " [\S\s]*? " 
             |  ' [\S\s]*? ' 
             |  [^>]? 
           )+
           \s* /?
      )
   |  \? [\S\s]*? \?
   |  (?:
           !
           (?:
                (?: DOCTYPE [\S\s]*? )
             |  (?: \[CDATA\[ [\S\s]*? \]\] )
             |  (?: -- [\S\s]*? -- )
             |  (?: ATTLIST [\S\s]*? )
             |  (?: ENTITY [\S\s]*? )
             |  (?: ELEMENT [\S\s]*? )
           )
      )
 )
 >
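
A usage sketch: the pattern relies on atomic groups (?>...), so from Python this assumes the third-party regex module (Python 3.11+'s re also accepts them). The pattern string is the "complete raw regex" above, verbatim; the sample markup is made up for illustration.

import regex  # assumes: pip install regex

TAG = regex.compile(
    r"""<(?:(?:(?:(script|style|object|embed|applet|noframes|noscript|noembed)(?:\s+(?>"[\S\s]*?"|'[\S\s]*?'|(?:(?!/>)[^>])?)+)?\s*>)[\S\s]*?</\1\s*(?=>))|(?:/?[\w:]+\s*/?)|(?:[\w:]+\s+(?:"[\S\s]*?"|'[\S\s]*?'|[^>]?)+\s*/?)|\?[\S\s]*?\?|(?:!(?:(?:DOCTYPE[\S\s]*?)|(?:\[CDATA\[[\S\s]*?\]\])|(?:--[\S\s]*?--)|(?:ATTLIST[\S\s]*?)|(?:ENTITY[\S\s]*?)|(?:ELEMENT[\S\s]*?))))>"""
)

sample = '<p class="x">text</p><!-- note --><script>if (a<b) {}</script>'
for m in TAG.finditer(sample):
    print(m.group(0))
# -> <p class="x">, </p>, <!-- note -->, and the whole <script>...</script> block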

Solution 11 - Regex

This expression retrieves attributes from HTML elements. It supports:

  • unquoted / quoted attributes,
  • single / double quotes,
  • escaped quotes inside attributes,
  • spaces around equals signs,
  • any number of attributes,
  • checking only for attributes inside tags,
  • skipping comments, and
  • handling different quotes within an attribute value.

(?:\<\!\-\-(?:(?!\-\-\>)\r\n?|\n|.)*?-\-\>)|(?:<(\S+)\s+(?=.*>)|(?<=[=\s])\G)(?:((?:(?!\s|=).)*)\s*?=\s*?[\"']?((?:(?<=\")(?:(?<=\\)\"|[^\"])*|(?<=')(?:(?<=\\)'|[^'])*)|(?:(?!\"|')(?:(?!\/>|>|\s).)+))[\"']?\s*)

Check it out. It works better with the "gisx" flags, as in the demo.

Solution 12 - Regex

"It depends" though. It's true that regexes don't and can't parse HTML with true accuracy, for all the reasons given here. If, however, the consequences of getting it wrong (such as not handling nested tags) are minor, and if regexes are super-convenient in your environment (such as when you're hacking Perl), go ahead.

Suppose you're, oh, maybe parsing web pages that link to your site--perhaps you found them with a Google link search--and you want a quick way to get a general idea of the context surrounding your link. You're trying to run a little report that might alert you to link spam, something like that.

In that case, misparsing some of the documents isn't going to be a big deal. Nobody but you will see the mistakes, and if you're very lucky there will be few enough that you can follow up individually.

I guess I'm saying it's a tradeoff. Sometimes implementing or using a correct parser--as easy as that may be--might not be worth the trouble if accuracy isn't critical.

Just be careful with your assumptions. I can think of a few ways the regexp shortcut can backfire if you're trying to parse something that will be shown in public, for example.

Solution 13 - Regex

There are definitely cases where using a regular expression to parse some information from HTML is the correct way to go - it depends a lot on the specific situation.

The consensus above is that in general it is a bad idea. However if the HTML structure is known (and unlikely to change) then it is still a valid approach.

Solution 14 - Regex

Keep in mind that while HTML itself isn't regular, portions of a page you are looking at might be regular.

For example, it is an error for <form> tags to be nested; if the web page is working correctly, then using a regular expression to grab a <form> would be completely reasonable.

I recently did some web scraping using only Selenium and regular expressions. I got away with it because the data I wanted was put in a <form>, in a simple table format (so I could even count on <table>, <tr> and <td> to be non-nested--which is actually highly unusual). To some degree, regular expressions were almost necessary, because some of the structure I needed to access was delimited by comments. (Beautiful Soup can give you comments, but it would have been difficult to grab <!-- BEGIN --> and <!-- END --> blocks using Beautiful Soup.)

If I had to worry about nested tables, however, my approach simply would not have worked! I would have had to fall back on Beautiful Soup. Even then, however, sometimes you can use a regular expression to grab the chunk you need, and then drill down from there.
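
A sketch of that hybrid approach (the comment markers and table data are hypothetical; assumes Beautiful Soup for the drill-down step): use a regex only to grab the comment-delimited chunk, then hand that chunk to a real parser.

import re
from bs4 import BeautifulSoup  # assumes: pip install beautifulsoup4

page = """
<!-- BEGIN -->
<table><tr><td>alpha</td><td>beta</td></tr></table>
<!-- END -->
"""

# The regex only grabs the chunk between the BEGIN/END comment markers...
chunk = re.search(r"<!--\s*BEGIN\s*-->(.*?)<!--\s*END\s*-->", page, re.S)

# ...then a parser drills into the (non-nested) table inside it.
if chunk:
    cells = BeautifulSoup(chunk.group(1), "html.parser").find_all("td")
    print([td.get_text() for td in cells])  # -> ['alpha', 'beta']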

Solution 15 - Regex

Actually, HTML parsing with regex is perfectly possible in PHP. You just have to parse the whole string backwards using strrpos to find < and repeat the regex from there using ungreedy specifiers each time to get over nested tags. Not fancy and terribly slow on large things, but I used it for my own personal template editor for my website. I wasn't actually parsing HTML, but a few custom tags I made for querying database entries to display tables of data (my <#if()> tag could highlight special entries this way). I wasn't prepared to go for an XML parser on just a couple of self created tags (with very non-XML data within them) here and there.

So, even though this question is considerably dead, it still shows up in a Google search. I read it and thought "challenge accepted" and finished fixing my simple code without having to replace everything. Decided to offer a different opinion to anyone searching for a similar reason. Also the last answer was posted 4 hours ago so this is still a hot topic.

Solution 16 - Regex

I tried my hand at a regex for this too. It's mostly useful for finding chunks of content paired with the next HTML tag, and it doesn't look for matching close tags, but it will pick up close tags. Roll a stack in your own language to check those.

Use with 'sx' options. 'g' too if you're feeling lucky:

(?P<content>.*?)                # Content up to next tag
(?P<markup>                     # Entire tag
  <!\[CDATA\[(?P<cdata>.+?)]]>| # <![CDATA[ ... ]]>
  <!--(?P<comment>.+?)-->|      # <!-- Comment -->
  </\s*(?P<close_tag>\w+)\s*>|  # </tag>
  <(?P<tag>\w+)                 # <tag ...
    (?P<attributes>
      (?P<attribute>\s+
# <snip>: Use this part to get the attributes out of 'attributes' group.
        (?P<attribute_name>\w+)
        (?:\s*=\s*
          (?P<attribute_value>
            [\w:/.\-]+|         # Unquoted
            (?=(?P<_v>          # Quoted
              (?P<_q>['\"]).*?(?<!\\)(?P=_q)))
            (?P=_v)
          ))?
# </snip>
      )*
    )\s*
  (?P<is_self_closing>/?)   # Self-closing indicator
  >)                        # End of tag

This one is designed for Python (it might work in other flavors, but I haven't tried; it uses positive lookaheads, negative lookbehinds, and named backreferences). Supports:

  • Open Tag - <div ...>
  • Close Tag - </div>
  • Comment - <!-- ... -->
  • CDATA - <![CDATA[ ... ]]>
  • Self-Closing Tag - <div .../>
  • Optional Attribute Values - <input checked>
  • Unquoted / Quoted Attribute Values - <div style='...'>
  • Single / Double Quotes - <div style="...">
  • Escaped Quotes - <a title='John\'s Story'>
    (this isn't really valid HTML, but I'm a nice guy)
  • Spaces Around Equals Signs - <a href = '...'>
  • Named Captures For Interesting Bits

It's also pretty good about not triggering on malformed tags, like when you forget a < or >.

If your regex flavor supports repeated named captures then you're golden, but Python's re doesn't (I know the third-party regex module does, but I need to use vanilla Python). Here's what you get:

  • content - All of the content up to the next tag. You could leave this out.
  • markup - The entire tag with everything in it.
  • comment - If it's a comment, the comment contents.
  • cdata - If it's a <![CDATA[...]]>, the CDATA contents.
  • close_tag - If it's a close tag (</div>), the tag name.
  • tag - If it's an open tag (<div>), the tag name.
  • attributes - All attributes inside the tag. Use this to get all attributes if you don't get repeated groups.
  • attribute - Repeated, each attribute.
  • attribute_name - Repeated, each attribute name.
  • attribute_value - Repeated, each attribute value. This includes the quotes if it was quoted.
  • is_self_closing - This is / if it's a self-closing tag, otherwise nothing.
  • _q and _v - Ignore these; they're used internally for backreferences.

If your regex engine doesn't support repeated named captures, there's a section called out that you can use to get each attribute. Just run that regex on the attributes group to get each attribute, attribute_name and attribute_value out of it.

Demo here: https://regex101.com/r/mH8jSu/11

Solution 17 - Regex

Regular expressions are not powerful enough for a language like HTML. Sure, there are some cases where you can use regular expressions. But in general they are not appropriate for parsing.

Solution 18 - Regex

You know... there's a lot of "you CAN'T do it" mentality here, and I think everyone on both sides of the fence is both right and wrong. You CAN do it, but it takes a little more processing than just running one regex against the input. Take this (I wrote it inside of an hour) as an example. It assumes the HTML is completely valid, but depending on what language you're using to apply the aforementioned regex, you could do some fixing of the HTML first to make sure it will succeed: for example, removing closing tags that are not supposed to be there (</img>, say), then adding the closing forward slash to self-closing elements that are missing it, etc.

I'd use this in the context of writing a library that would allow me to perform HTML element retrieval akin to JavaScript's [x].getElementsByTagName(), for example. I'd just splice up the functionality I wrote in the DEFINE section of the regex and use it for stepping inside a tree of elements, one at a time.

So, will this be the final 100% answer for validating HTML? No. But it's a start and with a little more work, it can be done. However, trying to do it inside of one regex execution is not practical, nor efficient.

Attributions

All content for this solution is sourced from the original question on Stack Overflow.

The content on this page is licensed under the Attribution-ShareAlike 4.0 International (CC BY-SA 4.0) license.

Original authors (all content from Stack Overflow):

Question: ntownsend
Solution 1 - Regex: Johannes Weiss
Solution 2 - Regex: kmkaplan
Solution 3 - Regex: Andy Lester
Solution 4 - Regex: Hank Gay
Solution 5 - Regex: Vatine
Solution 6 - Regex: Tamas Czinege
Solution 7 - Regex: okoman
Solution 8 - Regex: Peter Boughton
Solution 9 - Regex: taggers
Solution 10 - Regex: user557597
Solution 11 - Regex: Ivan Chaer
Solution 12 - Regex: catfood
Solution 13 - Regex: Jason
Solution 14 - Regex: alpheus
Solution 15 - Regex: Deji
Solution 16 - Regex: Hounshell
Solution 17 - Regex: Gumbo
Solution 18 - Regex: Erutan409