homepage: https://covh.github.io/cov19de/

No reading, just clicking? Then start here: Deutschland.html

- in **comparison to other regions**, ranked via a variation of the *conditional formatting table* of Tomas Pueyo in his "Coronavirus: Learning How to Dance" --> chart 3

- as a **time series plot** of observables over time (cumulative total cases, daily new cases, smoothed with a simple moving average).

Everything is based on the excellent crowdsourced data by RiskLayer.com, a research group in Karlsruhe. My instructions for importing that data *now run in the browser*!

- download as ZIP file (35 MB) - advantage: flipping through the >400 images is really fast, see folder pics/ , OR:

- 1 + 16 pages:
  - aggregated into one plot = Germany, OR
  - sorted by "**expectation day**":
    - by Bundesland (16 pages), OR
    - all of Germany's 401 districts ("Kreise")

- you can also sort the tables by: incidence / prevalence / reproduction factor / ...

- experimental: *with their neighbours*.

**Everything on this site could just be wrong!** Do not base
any decisions on this. Always do your own calculations.

If in doubt, check official sources, for example RKI.de, BundesGesundheitsMinisterium.de, and WHO.int.

It has been a "quick and dirty" hack ... putting together quite a large site in minimal time. There might be errors & bugs.

Please: if you see anything here that raises your suspicion, do alert me. Just raise an issue on GitHub. Thanks.

The *data quality* in Germany has a clear flaw: it fluctuates in a *weekly rhythm* (best seen e.g. in the GRAY wavy curve in the Germany plot), with Thursdays showing ~twice as many new cases as Sundays. As that mostly *delays* the reporting (even though in mild cases it might also lead to some *unreported* cases?), the total number of cases x days later will not be (much) affected by it. But the momentary situation "today" or "yesterday" is quite unclear. One workaround to minimize that disturbance comes in two steps: (1) averaging the cumulative total number of cases over the past 7 days = add up all 7 values and divide by 7.0, and then (2) shifting that result to the left by 3 days, because that is where the "center" of that 7-day average sits. For step (1), "averaging", there are actually many choices of how to do it, see e.g. this Wikipedia page - for now we are choosing a central "simple moving average".
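The two-step workaround can be sketched like this (an illustration only, with made-up numbers; the site's actual code may differ). Differencing the smoothed series then yields the "synthetic daily new cases" used for the table coloring:

```python
def centered_sma7(cumulative):
    """Centered 7-day simple moving average of a cumulative series.

    Step (1): average each window of 7 consecutive values.
    Step (2): assign the result to the window's CENTER day, i.e.
    shift it 3 days to the left of the window's last day.
    Returns a list of (center_day_index, smoothed_value) pairs.
    """
    out = []
    for end in range(6, len(cumulative)):      # 'end' = last day of each 7-day window
        window = cumulative[end - 6 : end + 1]
        avg = sum(window) / 7.0                # step (1): simple average
        out.append((end - 3, avg))             # step (2): center = end - 3
    return out

# made-up cumulative totals showing a weekly reporting rhythm
cumulative = [0, 5, 12, 30, 55, 80, 90, 92, 100, 125, 160, 190, 205, 210]
smoothed = centered_sma7(cumulative)

# "synthetic daily new cases" = day-to-day difference of the smoothed values
synthetic_daily = [b[1] - a[1] for a, b in zip(smoothed, smoothed[1:])]
```

The weekly zig-zag largely cancels out because every 7-day window contains exactly one of each weekday.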

- plots: in *most* plots you can see that the **7-day average** (**purple**) is well smoothed already, no need for a 14-day window (**orange**).

- tables: the *background colors* of the table cells: *first* the cumulative total cases are smoothed, with a 7-day window, and centered; *then* "synthetic daily new cases" are calculated *from that*, and used as the coloring scalar between 0 and the max occurring number *in that row*.

- plots: the GRAY wavy line is the (real, reported, raw, unaveraged) daily number of new cases.
- tables: the numbers themselves, in the colored table cells, are the (real, reported, raw, unaveraged) total number of cases up to that day.
- the following "expectation day", see (A2), is calculated from the (real, raw, reported, unaveraged) input data of total cases, by taking the difference of two consecutive totals to get **daily_cases(daynumber)**, which is the input sequence for:

*This is used in two places -->* the **GREEN** triangle in each *plot* marks that day, and all comparison heatmap *tables* are sorted by that "expectation day" column.

*What is it? -->* (at least until 2nd waves are happening ...) a good proxy for **how relatively dramatic the situation still is in a certain region** is what we call the

expectationday = sum_over_all_daynumbers [ daynumber * daily_cases(daynumber) ] / total_cases

with

**total_cases** = sum_over_all_daynumbers [ daily_cases(daynumber) ]

**daynumber** = 0 is the first day for which we have data (incrementing for each later day), and

**daily_cases(daynumber)** = the number of additional cases on day daynumber (note that for the very first day, the day with daynumber = 0, this is undefined).

In other words, the "expectation day" is the *average day, weighted with the number of new cases for each day* = so we get an "expectation value" for the day.
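A minimal sketch of that calculation, directly from the definitions above (an illustration with a made-up toy series, not the site's actual code):

```python
def expectation_day(total_cases_per_day):
    """'Expectation day': the case-weighted average day number.

    Input: cumulative total cases, one value per day (daynumber 0 first).
    daily_cases(d) = total(d) - total(d-1), undefined for day 0.
    """
    days = range(1, len(total_cases_per_day))
    daily = [total_cases_per_day[d] - total_cases_per_day[d - 1] for d in days]
    total = sum(daily)                                   # total_cases
    weighted = sum(d * c for d, c in zip(days, daily))   # sum of daynumber * daily_cases
    return weighted / total

# toy example: all 10 cases arrive on day 2 --> expectation day is exactly 2.0
print(expectation_day([0, 0, 10, 10, 10]))
```

The later the bulk of the cases arrived, the larger the expectation day, which is why sorting by it puts the "still dramatic" regions together.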

Now all **tables can be SORTED** by specific columns, by clicking the column title text. (The large table can take ~30 seconds to sort. Please be patient. The yellow color disappears when the sorting is finished. Enable JavaScript for this to work.) Now - with this new sorting option - it makes sense to add *more aggregating measures*. Please make suggestions for which columns I could try out. Thanks. A first idea is already included:

**Reff_4_7** = quotient of the newest smoothed daily cases and the smoothed daily cases from 4 days ago --> an estimate of the **effective reproduction number R_eff**. Assumption: one infection generation is 4 days long on average. Same as the RKI method described in this article, but (4-day smoothing was leading to shaky results, so) here *smoothed over 7 days*.

- It can happen that R becomes NaN (not a number) when the days -10 to -4 had been ZERO, because then the denominator is zero --> i.e. you cannot really calculate R during the beginning of an outbreak.
- **last 14 days new cases** total --> absolute number, and **"incidence"** = divided by population
- also sort by: population, prevalence (total cases per 1 million people), Region name, expectation day - see (A2)
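A minimal sketch of such an Reff_4_7 estimate (my own illustration of the description above, with made-up data; the 4-day generation time and 7-day smoothing are as stated, everything else is an assumption):

```python
def sma7(values):
    """Plain 7-day simple moving average; each value assigned to its window's last day."""
    return [sum(values[i - 6 : i + 1]) / 7.0 for i in range(6, len(values))]

def reff_4_7(daily_cases):
    """R_eff estimate: smoothed daily cases today / smoothed daily cases 4 days ago.

    Returns float('nan') when the denominator is zero, i.e. during the
    very beginning of an outbreak R cannot really be calculated.
    """
    smoothed = sma7(daily_cases)
    if len(smoothed) < 5:
        raise ValueError("need at least 11 days of daily case data")
    numerator, denominator = smoothed[-1], smoothed[-5]   # 4 days apart
    return numerator / denominator if denominator != 0 else float("nan")

# a constant 7 new cases per day gives an R_eff estimate of exactly 1.0
print(reff_4_7([7] * 14))
```

With zeros in the early window the denominator is 0 and the function returns NaN, matching the behavior described in the bullet above.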

- TU Dortmund's AI department in computer science - a forecast model for each district (401 out of 401)
- Wikipedia Kreis (district) and KreisSitz (capital city) pages are now linked where available (294 out of 401)
- risklayer master sheet --> sources --> now added as [1], [2], [3] below each district plot

You find those as links in the "other sites" section below each "Kreis" plot. Please tell me about more Covid19 related projects on the Kreis level. Thanks.

- Dr Andreas Krüger, see twitter.com/drandreaskruger (without Umlaut).
- Project is born out of curiosity, love for maths & coding - and blissful states while creating it :-)
- Funding: self-financed, i.e. no funding whatsoever. Money is not what drives me. Still, feel free to support my work:

- Of course **money** is appreciated too:
  - [BTC] 3Km23oagxEnyt9tJoSyjzns3qGQ6hWSfho
    - thank you, unknown supporter - the 1st donation came in on 2020-05-11 08:47
  - [ETH] 0x8b70F93D1858C3e06F8703Aa975CB95121519259
  - [DASH] XfJscL28YdmaX5tmTFrusehAMAYHNYs3qN
  - this coin is actually *not ProofOfWork*, so it won't use as much electricity as the above: [NEM] NCR32VKG5VAYNNJECNWQHI6V4Z6VZOWDLCVJWMKA
  - CreditCard/Paypal via my GitHub sponsors page

- Most importantly, please give it **attention** = use your social media influence, retweet, blog about it, share screenshots ... thank you very much!

- Here are some testimonials now.

Please also retweet, thanks:

- Most English announcement tweets are in a thread
- Some tweets also in German: whole new thread with all German language tweets.
- Please retweet, to spread the word about this project - thanks.

- News site **article** (German) on **heise.de** - tweet, article

**"CoronaVirus: Landkreise brauchen nun Aufmerksamkeit"** ("Coronavirus: districts now need attention")

- most of my initial forum answers are linked here

- Suggest what else could be added, or report typos or calculation errors --> please raise an issue here. Thanks.
- There is also a list of TODOs, plus a history.txt and a log.txt file of a recent pics-and-pages generation.

- January 28th - April 6th: thread#1
- April 7th onwards: thread#2 begins by summarizing thread#1, and the very first days of an outbreak.

Plenty of data, exponential fits, virus information, news articles, politics, opinion, etc. - a good recording of what happened on the timeline.
