10 Scraping Berlin police reports

Over the next three chapters we will follow a small sample project. In this chapter we will briefly outline the research topic and scrape the data. In the next two chapters we will concern ourselves with cleaning, transforming and analysing the data: first statistically, then graphically.

10.1 Topic and data

This sample project aims to analyse police reports in Berlin. Specifically, we will try to answer two questions:

  • Does the number of reports differ by district?
  • Does the number of reports differ over time?
    • over years
    • over months
    • over days of the week
    • over time of day

Note that these are ad hoc questions constructed for this sample project. In a real research project, we would have to motivate the research question more clearly and develop hypotheses based on theoretical considerations and existing research. These steps are skipped here to keep the focus on scraping the data and basic methods of data analysis.

Now that topic and questions are defined, we need some data to answer them. The website https://www.berlin.de/polizei/polizeimeldungen/archiv/ contains reports by the Berlin police that are open to the public, beginning with the year 2014. We will gather the links to all subpages, download them and extract the text of each report, as well as the date, time and district it refers to.

10.2 Scraping the data

10.2.2 Downloading the subpages

Now that the links to all subpages are gathered, we can finally download them. Please note that the download will take up to 20 minutes due to the number of subpages (443 at the time of writing) and the additional waiting time of two seconds between each iteration.

# Download each subpage, pausing two seconds between requests
pages <- pag_links %>% 
  map(~ {
    Sys.sleep(2)   # be polite: wait before each request
    read_html(.x)
  })
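
With this many requests, a single network error would abort the whole run. As an optional, more defensive variant (a sketch, not part of the original approach; the names safe_read and failed_links are illustrative), we can wrap read_html() in purrr's possibly() so that failed downloads are recorded instead of stopping the loop:

# Wrap read_html() so a failed request yields NULL instead of an error
safe_read <- possibly(read_html, otherwise = NULL)

pages <- pag_links %>% 
  map(~ {
    Sys.sleep(2)
    safe_read(.x)
  })

# Links whose download failed and that could be retried
failed_links <- pag_links[map_lgl(pages, is.null)]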

10.2.3 Extracting the data of interest

The goal is to extract the text of the report, the date/time and the district the report refers to. Looking into the source code, we find that all reports are list items in an unordered list. Conveniently for us, all data fields we are interested in have distinct classes we can use in scraping. Date and time are enclosed by a <div> tag with the classes cell, nowrap and date. The report headline is also part of a <div>; here the classes are cell and text. The same <div> also includes the district’s name, but we can distinguish between the two: the headline is included in an <a> tag that is a child of the <div>, while the district is part of a <span> with the class category, which is also a child of the <div>. We can use this information to construct appropriate CSS selectors like this:

# First attempt: extract the three fields directly into a tibble
reports <- tibble(
  Date = pages %>% 
    map(html_elements, css = "div.cell.nowrap.date") %>% 
    map(html_text) %>% 
    unlist(),
  Report = pages %>% 
    map(html_elements, css = "div.cell.text > a") %>% 
    map(html_text) %>% 
    unlist(),
  District = pages %>% 
    map(html_elements, css = "div.cell.text > span.category") %>% 
    map(html_text) %>% 
    unlist()
)
## Error in `tibble()`:
## ! Tibble columns must have compatible
##   sizes.
## • Size 21952: Existing data.
## • Size 21525: Column `District`.
## ℹ Only values of size one are recycled.

That did not work. But why? Let us look at the error message we received: it informs us that the column “District” we tried to create is shorter than the other two. Since we cannot create a tibble out of columns of different lengths, we get this error.

The fact that the “District” column is shorter than the other two must mean that for some police reports the district is not listed on the website. We can confirm this by browsing some of the subpages, e.g. https://www.berlin.de/polizei/polizeimeldungen/archiv/2014/.
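
Instead of browsing, we can also locate the mismatch programmatically. A small sketch that counts, for every downloaded subpage, how many headlines and how many district tags the two selectors find (the names n_reports and n_districts are illustrative):

n_reports <- pages %>% 
  map_int(~ length(html_elements(.x, css = "div.cell.text > a")))
n_districts <- pages %>% 
  map_int(~ length(html_elements(.x, css = "div.cell.text > span.category")))

# Subpages that contain at least one report without a district
which(n_reports != n_districts)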

10.2.3.1 Dealing with missing data in lists

Some of the <span> tags that contain the district are missing. Using the approach presented above, html_elements() extracts just the <span> tags that are present: we tell R that we want all <span> tags of the class “category”, and that is what R returns. For list items where the tag is missing, nothing is returned. But this is not what we want. What we actually want is for R to look at every single police report and save its text, date and time, as well as the district if it is present; if it is missing, R should save an NA, the representation of missing values in R, in the corresponding cell of the tibble.

The <div> and <span> tags that contain the data of interest are in this case nested in <li> tags. Each <li> tag thus contains the whole set of data: the text, the date/time and the district. The approach here is to make R examine every single <li> tag, extract the data that is present, and save an NA for every piece of data that is missing.

To start, we have to extract all the <li> tags and their content from the subpages. Right now, the subpages are saved in the object pages. We use a for loop that takes every element of pages, extracts all the <li> tags from it and adds them to a new list using append(). Note that we have to use pages[[]] to subset the list of subpages, as we want to access the actual list elements, i.e. the parsed subpages; as with tibbles, pages[] would always return another list. The <li> tags are all children of a <ul> with the class list--tablelist, which we can use in our selector. append() takes an existing object as its first argument and adds the data passed to its values = argument. For this to work, we have to initialise the new list as an empty object before the loop starts.

list_items <- NULL

# Collect the <li> tags from every subpage in one flat list
for (i in seq_along(pages)) {
  list_items <- append(
    list_items,
    values = html_elements(pages[[i]], css = "ul.list--tablelist > li")
  )
}
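
As a quick sanity check, the number of collected <li> tags should match the total number of reports we saw in the size of the existing data in the error message above:

length(list_items)
## [1] 21952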

The newly created list list_items contains a node for each <li> tag from all subpages. Again, we have to use double brackets to access the node itself. With single brackets, a new list containing the node as its first element is returned, as illustrated here:

list_items[[1]]
## {html_node}
## <li>
## [1] <div class="cell nowrap date">16.10.2023 17:55 Uhr</div>\n
## [2] <div class="cell text">\n<a href="/polizei/polizeimeldungen/2023/pressemi ...

list_items[1]
## [[1]]
## {html_node}
## <li>
## [1] <div class="cell nowrap date">16.10.2023 17:55 Uhr</div>\n
## [2] <div class="cell text">\n<a href="/polizei/polizeimeldungen/2023/pressemi ...

We can now make R examine every element of this list one after the other and extract the data it contains. But what happens when we try to extract a tag that is not present in one of the list items? R returns a missing node, represented as NA:

html_element(list_items[[1]], css = "span.notthere")
## {xml_missing}
## <NA>
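
Crucially, piping this missing node into html_text() yields an NA character value, which is exactly the behaviour we need when filling the tibble:

html_element(list_items[[1]], css = "span.notthere") %>% 
  html_text()
## [1] NA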

Using a for loop that iterates over the elements in list_items, we write a tibble row by row, filling the cells with the extracted information, or with NA if a tag could not be found in the respective element of list_items. We have to initiate the tibble before the for loop starts: we define the column names, the type of data to be saved, and also the length of the columns. The latter is not strictly necessary, as we could also have created a tibble with columns of length 0, but pre-defining the length increases computational efficiency. Still, the for loop has to iterate over several thousand elements and extract the data they contain, which will take several minutes to complete.

# Pre-allocate each column as a character vector of NAs
reports <- tibble(
  Date = rep(NA_character_, length(list_items)),
  Report = rep(NA_character_, length(list_items)),
  District = rep(NA_character_, length(list_items))
)

for (i in seq_along(list_items)) {
  # html_element() returns a missing node if the selector does not match,
  # which html_text() turns into an NA
  reports[i, "Date"] <- html_element(list_items[[i]], css = "div.cell.nowrap.date") %>% 
    html_text()
  reports[i, "Report"] <- html_element(list_items[[i]], css = "div.cell.text > a") %>% 
    html_text()
  reports[i, "District"] <- html_element(list_items[[i]], css = "div.cell.text > span.category") %>% 
    html_text()
}
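
As an aside, the same tibble can be built without growing it row by row. A sketch of a faster alternative using purrr's map_chr() (the name reports_alt is illustrative; it relies on the same NA behaviour of html_element() shown above):

reports_alt <- tibble(
  Date = list_items %>% 
    map_chr(~ html_text(html_element(.x, css = "div.cell.nowrap.date"))),
  Report = list_items %>% 
    map_chr(~ html_text(html_element(.x, css = "div.cell.text > a"))),
  District = list_items %>% 
    map_chr(~ html_text(html_element(.x, css = "div.cell.text > span.category")))
)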

Let’s look at the tibble we just constructed:

reports
## # A tibble: 21,952 × 3
##    Date                 Report                                          District
##    <chr>                <chr>                                           <chr>   
##  1 16.10.2023 17:55 Uhr Verdacht eines Tötungsdeliktes - Mordkommissio… Ereigni…
##  2 16.10.2023 17:49 Uhr Gefährliche Körperverletzung durch Schuss - Du… Ereigni…
##  3 16.10.2023 14:43 Uhr Vorkommnisse im Zusammenhang mit dem Nahost-Ko… Ereigni…
##  4 16.10.2023 11:26 Uhr Festnahme nach räuberischem Diebstahl           Ereigni…
##  5 16.10.2023 10:07 Uhr Verletzte Radfahrerin in Krankenhaus verstorben Ereigni…
##  6 16.10.2023 09:53 Uhr Brandstiftung an mehreren Fahrzeugen            Ereigni…
##  7 15.10.2023 21:07 Uhr Verkehrsunfall mit Sonder- und Wegerechten      Ereigni…
##  8 15.10.2023 15:46 Uhr Raub im Juweliergeschäft                        Ereigni…
##  9 15.10.2023 14:59 Uhr Einkaufswagen aus Hochhaus geworfen - Mordkomm… Ereigni…
## 10 15.10.2023 14:47 Uhr Vorkommnisse im Zusammenhang mit den weltweite… Ereigni…
## # ℹ 21,942 more rows

This looks good, but we should also confirm that NAs were handled correctly. We can examine the entry for “31.12.2014 13:21 Uhr” that we saw on https://www.berlin.de/polizei/polizeimeldungen/archiv/2014/, and for which the district was missing. We can use subsetting to look at just this one observation in our tibble. Remember that when subsetting two-dimensional objects like tibbles, we have to supply an index for the row(s) as well as for the column(s) we want to subset. Our goal is to subset the row for which the column “Date” holds the value “31.12.2014 13:21 Uhr”. We can thus write our row index as reports$Date == "31.12.2014 13:21 Uhr", which reads as: the row(s) for which the value of the column “Date” in the object “reports” is equal to “31.12.2014 13:21 Uhr”. As we want to see all columns for this observation, we do not need to supply a column index. By writing nothing after the comma, we instruct R to return all columns.

reports[reports$Date == "31.12.2014 13:21 Uhr", ]
## # A tibble: 1 × 3
##   Date                 Report                                           District
##   <chr>                <chr>                                            <chr>   
## 1 31.12.2014 13:21 Uhr Alkoholisiert geflüchtet und die Kontrolle verl… <NA>
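
As an additional check, we can count the missing districts; going by the sizes in the error message above, this should be 21952 − 21525 = 427:

sum(is.na(reports$District))
## [1] 427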

This also looks good. We have now extracted the data we need to answer our questions.

10.2.4 Saving the data

As discussed in chapter 8, we save the scraped data at this point. You have seen that downloading all the subpages took a considerable amount of time; if we repeated the download for every further step of data analysis, we would create a lot of unnecessary traffic and waste a lot of our own time.

save(reports, file = "reports.RData")
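
In a later session, the tibble can then be restored without repeating the scraping:

# Restores the object `reports` into the current session
load("reports.RData")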

In the next chapter we will continue with cleaning the data, transforming it and calculating some descriptive statistics on it.