The mission of the Asian American CV19 Archive Project is to document and keep a historical record of bias, xenophobia, displacement, and injustice against the Asian American and Asian community during the COVID-19 pandemic, and to share that data with all communities via open-source technologies, so that no single entity or individual owns the historical archive, helping to ensure that it can never be erased. To accomplish this mission, the main goals of the archive project are as follows:

How is data collected? Data (articles, posts, and associated media) is collected via three main methods:

How is it decided which content is entered into the archive?

Are there multiple entries for one topic?

Why don't all entries contain an image?

Why don't all entries contain an HTML or PDF archive?

Why does the HTML or PDF archive look different from the original source document?

Why do some articles have a PDF link but no PDF is found?

When is data archived?

How far back does the archive's data go?

Don't the sources own their data? How can others "own" the data as well?

When searching in the "Data View," does it search the full HTML archive?

Why do some Site values have an organization name while others have a top-level URL?

While the current site provides both the data and a user interface for consumption by the general community, GitHub is used to share all of the data from the archive, allowing anyone to download, copy, and create completely new repositories of the data.
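As a minimal sketch of how this works in practice: cloning a Git repository copies its entire history, so every clone is a complete, independent replica of the data, with no single authoritative copy. The repository URL in the comment below is a hypothetical placeholder; the actual address is listed on the project's GitHub page. The local demo repository and file names are illustrative only.

```shell
#!/bin/sh
# A real download of the archive would be a single command, e.g.:
#   git clone https://github.com/<organization>/<archive-repository>.git
# (URL above is a placeholder -- see the project's GitHub page.)
#
# The same principle demonstrated locally: a clone carries the full
# history, so each copy is a complete, independent replica.
set -e
demo=$(mktemp -d)

# Create a small "original" repository with one data file.
git init -q "$demo/original"
printf 'id,title\n1,example entry\n' > "$demo/original/entries.csv"
git -C "$demo/original" add entries.csv
git -C "$demo/original" -c user.email=demo@example.com -c user.name=demo \
    commit -q -m "add first archive entry"

# Clone it: the copy contains all data and all history.
git clone -q "$demo/original" "$demo/copy"
cat "$demo/copy/entries.csv"   # the data travels with every clone
```

Because each clone is self-sufficient, deleting or altering any one copy (including the original) does not affect the others; this is the property that keeps the archive from being erased by any single party.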

What is GitHub? Isn't this more for software development and code?

What data is available via the GitHub repository, and in what formats?

Please see the GitHub repository for more information on data, formats, and retrieval of the archive.

What is the URL of the GitHub repository?

When is data updated in the GitHub repository?