<?xml version="1.0" encoding="utf-8" standalone="yes"?><rss version="2.0" xmlns:atom="http://www.w3.org/2005/Atom"><channel><title>Data Management | Nalin Gadihoke</title><link>https://www.nalingadihoke.com/category/data-management/</link><atom:link href="https://www.nalingadihoke.com/category/data-management/index.xml" rel="self" type="application/rss+xml"/><description>Data Management</description><generator>Wowchemy (https://wowchemy.com)</generator><language>en-us</language><copyright>© Nalin Gadihoke, 2020</copyright><lastBuildDate>Mon, 06 Jul 2020 00:00:00 +0000</lastBuildDate><image><url>https://www.nalingadihoke.com/images/icon_huf9971291de093faa6aa59cd65f433195_5940_512x512_fill_lanczos_center_3.png</url><title>Data Management</title><link>https://www.nalingadihoke.com/category/data-management/</link></image><item><title>Financial Impact of COVID-19</title><link>https://www.nalingadihoke.com/post/financial-impact-of-covid/</link><pubDate>Mon, 06 Jul 2020 00:00:00 +0000</pubDate><guid>https://www.nalingadihoke.com/post/financial-impact-of-covid/</guid><description>&lt;p>As part of Prof. Wang&amp;rsquo;s team, one of my first major tasks was to automate the extraction of bankruptcy data covering the duration of the pandemic using a web scraper. The Selenium WebDriver API and Beautiful Soup were both used to scrape the &lt;a href="https://news.bloomberglaw.com/" target="_blank" rel="noopener">Bloomberg Law&lt;/a> website for Chapter 7, 11, 12, and 13 bankruptcy filings. My primary work now is data management: using dask for parallel computing to index millions of transaction data points in pandas.&lt;/p>
&lt;p>Since this is ongoing research, I can&amp;rsquo;t go into more detail; in the meantime, check out my other &lt;a href="https://www.nalingadihoke.com/#projects" target="_blank" rel="noopener">projects&lt;/a>.&lt;/p></description></item></channel></rss>