As of version 7.62.0, `libcurl` exposes its URL parser. This package provides tools to parse URLs using that new parser feature.
**UNTIL the `curl`/`libcurl` general release at the end of October you _must_ use the development version, which can be cloned and built from <https://github.com/curl/curl>.**
`curlparse` includes a `url_parse()` function that makes it easy for current users of `urltools::url_parse()` to switch to this package, since it provides the same API and returns the same results (including a regular data frame rather than a `tbl`).
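A quick illustrative sketch of that parity (the example URLs here are arbitrary):

```{r}
# Both calls take a character vector of URLs and return a plain
# data frame with one row per URL (example URLs are made up)
urls <- c(
  "https://rud.is/b/2018/10/?x=1#top",
  "http://user:pw@example.com:8080/path"
)

str(curlparse::url_parse(urls))
str(urltools::url_parse(urls))
```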
Spoiler alert: `urltools::url_parse()` is faster, by roughly 100µs per 100 URLs, on "good" URLs (with a mix of gnarly/bad and valid URLs the two are closer to par). The aim was not to beat it, though.
>Per the [blog post introducing this new set of API calls](https://daniel.haxx.se/blog/2018/09/09/libcurl-gets-a-url-api/):
>
>Applications that pass in URLs to libcurl would of course still very often need to parse URLs, create URLs or otherwise handle them, but libcurl has not been helping with that.
>
>At the same time, the under-specification of URLs has led to a situation where there's really no stable document anywhere describing how URLs are supposed to work and basically every implementer is left to handle the WHATWG URL spec, RFC 3986 and the world in between all by themselves. Understanding how their URL parsing libraries, libcurl, other tools and their favorite browsers differ is complicated.
>
>By offering applications access to libcurl's own URL parser, we hope to tighten a problematic vulnerable area for applications where the URL parser library would believe one thing and libcurl another. This could and has sometimes lead to security problems. (See for example Exploiting URL Parser in Trending Programming Languages! by Orange Tsai)
So, using this package adds consistency with how `libcurl` itself sees and handles URLs.
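As a hedged illustration of why that matters (the exact output depends on your `libcurl` build), URLs with embedded credentials are a classic spot where parsers disagree, so it is worth comparing what each package reports for the same input:

```{r}
# Arbitrary example URL; compare how each parser splits out the
# user, host, and port components
tricky <- "https://user@example.com:8443/p?q=1#f"
curlparse::url_parse(tricky)
urltools::url_parse(tricky)
```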
```{r}
library(microbenchmark)
library(ggplot2) # needed for autoplot() on microbenchmark objects

set.seed(0)
test_urls <- sample(blog_urls, 100) # pick 100 URLs at random

# urltools was loaded before curlparse at the top, so namespace
# loading isn't a factor in the benchmarks
microbenchmark(
  curlparse = curlparse::url_parse(test_urls),
  urltools = urltools::url_parse(test_urls),
  times = 500
) -> mb

mb

autoplot(mb)
```
The individual component handlers are closer to par but mostly still slower (except for `fragment()`). Note that `urltools` has no equivalent function for extracting just the query string, so that is not in the test.
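For completeness, a small sketch of the one handler named above; `urltools` exports a `fragment()` extractor as well, which makes for a direct comparison:

```{r}
# Compare the fragment() handlers from both packages on an
# arbitrary example URL
u <- "https://example.com/page?x=1#section-2"
curlparse::fragment(u)
urltools::fragment(u)
```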