


Coursework
CSci 39542: Introduction to Data Science
Department of Computer Science
Hunter College, City University of New York
Fall 2021


Quizzes    Homework    Project   

Quizzes

Unless otherwise noted, quizzes are available on Blackboard for the 24 hours after lecture. Blackboard quizzes are 15 minutes long and can be repeated up to the deadline. The highest score earned on a Blackboard quiz will be reported. Blackboard access is generated automatically from the registrar. See the ICIT Blackboard page for resources and tutorials for using the system.

Five of the quizzes assess your programming skill using HackerRank. These quizzes are 30 minutes long and cannot be repeated. Links will be available on Blackboard to access the quiz.

There are no make-up quizzes. Instead, your score on the final exam will replace missing quiz grades (the final exam will also replace a quiz grade when you take the quiz but do better on the final exam). See the syllabus for additional information on how grades are calculated.

Quiz 1: Due 4pm, Friday, 27 August. The first quiz asks that you confirm that you have read Hunter College's Academic Integrity Policy:

Hunter College regards acts of academic dishonesty (e.g., plagiarism, cheating on examinations, obtaining unfair advantage, and falsification of records and official documents) as serious offenses against the values of intellectual honesty. The College is committed to enforcing the CUNY Policy on Academic Integrity and will pursue cases of academic dishonesty according to the Hunter College Academic Integrity Procedures.

Quiz 2: Due 4pm, Tuesday, 31 August.   The second quiz focuses on the Python Recap: basics and standard packages (pandas, numpy, matplotlib, & seaborn), zips, and list comprehensions from Lecture 1.

Quiz 3: Due 4pm, Friday, 3 September.  The quiz covers data sampling from the third lecture and the reading: DS 100: Chapter 2 (Theory for Data Design) and includes Python review questions.

Quiz 4: Due 4pm, Friday, 10 September.  The quiz covers Python string methods and Python data types from the second and third lectures and the reading: DS 100: Section 13.1 (Python String Methods) and subsetting dataframes from DS 100: Chapter 7 (Data Tables in Python).

Quiz 5: Due 4pm, Tuesday, 14 September.  This is a coding quiz on HackerRank focusing on the Python constructs and package from the first two weeks. You will be sent an invitation to the email you use for Gradescope for this quiz. You have 30 minutes to complete the quiz, and the quiz cannot be repeated.

Quiz 6: Due 4pm, Tuesday, 21 September.  The quiz covers regular expressions from Lecture #3 and DS 100: Sections 13.2-13.3 (Regular Expressions).

Quiz 7: Due 4pm, Friday, 24 September.  This quiz covers SQL from Lectures #4, 5 & 6 and DS 100: Chapter 7 (Relational Databases & SQL).

Quiz 8: Due 4pm, Tuesday, 28 September.  This quiz covers DataFrames from Pandas, covered in Lectures #4, 5 & 6 and DS 100: Chapter 6 (Data Tables in Python).

Quiz 9: Due 4pm, Friday, 1 October.  The focus is functions in Python, covered in the code demos in Lectures #5 and #6.

Quiz 10: Due 4pm, Tuesday, 5 October.  This is a coding quiz on HackerRank. You will be sent an invitation to the email you use for Gradescope for this quiz. You have 30 minutes to complete the quiz, and the quiz cannot be repeated.

Quiz 11: Due 4pm, Friday, 8 October.  The focus is on data visualization as discussed in Lectures #7, #8, and #9 and DS 100: Chapter 11 (Data Visualization).

Quiz 12: Due 4pm, Friday, 15 October.  The quiz covers loss functions, correlation, and regression from Lectures #10, #11 & #12 and the reading: DS 100, Sections 4.2-4.4 (Loss Functions).

Quiz 13: Due 4pm, Tuesday, 19 October.  The focus of this quiz is sampling distributions and the Central Limit Theorem, covered in Lectures #11-12, DS 8: Chapter 9 (Randomness), and DS 100: Chapter 16 (Probability & Generalization).

Quiz 14: Due 4pm, Friday, 22 October.  The quiz covers linear models, ordinary least squares, and gradient descent from Lectures #11-13 and the reading: DS 100: Chapter 17 (Gradient Descent).

Quiz 15: Due 4pm, Tuesday, 26 October.  This is a coding quiz on HackerRank focusing on regular expressions. A link to access the quiz is available on Blackboard 24 hours before the due date. You have 30 minutes to complete the quiz, and the quiz cannot be repeated.

Quiz 16: Due 4pm, Friday, 29 October.  The quiz reviews topics from the first 15 lectures.

Quiz 17: Due 4pm, Tuesday, 2 November.  The quiz covers the feature engineering from Lectures #15-16 and the reading: DS 100: Chapter 20 (Feature Engineering).

Quiz 18: Due 4pm, Friday, 5 November.  The quiz covers the logistic model from Lectures #17-18 and the reading: DS 100: Chapter 20 (Classification).

Quiz 19: Due 4pm, Tuesday, 9 November.  The quiz covers logistic regression from Lectures #17-19 and the reading: DS 100: Chapter 20 (Classification).

Quiz 20: Due 4pm, Friday, 12 November.  This is a coding quiz on HackerRank. A link to access the quiz is available on Blackboard 24 hours before the due date. You have 30 minutes to complete the quiz, and the quiz cannot be repeated.

Quiz 21: Due 4pm, Tuesday, 16 November.  The quiz covers classification from Lectures #17-21 and the reading: DS 100: Chapter 20 (Classification) and Python DS Handbook Chapter 5 (SVMs).

Quiz 22: Due 4pm, Friday, 19 November.  The quiz covers the linear algebra review from Lecture #21.

Quiz 23: Due 4pm, Tuesday, 23 November.  The quiz focuses on Principal Components Analysis from Lectures #21-23 and the reading: Python Data Science Handbook: Section 5.9 (PCA). Since we have not yet covered all of that material, this quiz will instead review topics from Lectures 1-20.

Quiz 24: Due 4pm, Tuesday, 30 November.  The quiz covers multidimensional scaling and dimensionality reduction from Lectures #22-23 and the reading: Manifold Learning (sklearn).

Quiz 25: Due 4pm, Friday, 3 December.  This is a coding quiz on HackerRank. You will be sent an invitation to the email you use for Gradescope for this quiz. You have 30 minutes to complete the quiz, and the quiz cannot be repeated.

Quiz 26: Due 4pm, Tuesday, 7 December.  The quiz covers K-Means Clustering from Lectures #24-26 and the reading: DS 100: Chapter 28 (Clustering) and Python Data Science Handbook: Section 5.9 (K-Means).

Quiz 27: Due 4pm, Friday, 10 December.  The quiz reviews topics from the first 26 lectures.

Quiz 28: Due 4pm, Tuesday, 14 December.  The last quiz is an end-of-semester survey.






Homework

Unless otherwise noted, programs are submitted on the course's Gradescope site and are written in Python. Also, to receive full credit, the code should be compatible with Python 3.6 (the default for the Gradescope autograders).

All students registered by Monday, 23 August were sent a registration invitation to the email on record on their Blackboard account. If you did not receive the email or would like to use a different account, post to Help::Individual Questions (on the left-hand menu when logged into the course site on Blackboard). Include in your post that you did not receive a Gradescope invitation and your preferred email, and we will manually generate an invitation. As a default, we use your name as it appears in Blackboard/CUNYFirst (to update CUNYFirst, see changing your personal information). If you prefer a different name for Gradescope, include it in your post, and we will update the Gradescope registration.

To get full credit for a program, the file must include in the opening comment:

For example, for the student, Thomas Hunter, the opening comment of his first program might be:

"""
Name:  Thomas Hunter
Email: thomas.hunter.1870@hunter.cuny.edu
Resources:  Used python.org as a reminder of Python 3 print statements.
"""
and then followed by his Python program.



Set 1: The first set of programs recaps familiar Python constructs and packages. None are challenging; instead, their purpose is to serve as review and to ensure that your Python IDE is functional, has the basic libraries installed, and that you can submit programs to Gradescope.

Program 1: Hello, world. Due noon, Friday, 27 August.
(Learning Objective: students are able to use a Python IDE on their computer and successfully submit the work to the Gradescope system.)

Submit a Python program that prints: Hello, world

Program 2: Senators' Names. Due noon, Monday, 30 August.
(Learning Objective: students can successfully read and write CSV files and use the Pandas package to select rows, filtered by boolean expressions.)

Write a program, using the pandas package, that asks the user for the name of an input CSV file and the name of an output CSV file. The program should open the file name provided by the user. Next, the program should select rows where the field senate_class is non-empty and write the first_name and last_name to a file with the output file name provided by the user.

For example, if the file was legislators-current.csv with the first 3 lines of:


last_name,first_name,middle_name,suffix,nickname,full_name,birthday,gender,type,state,district,senate_class,party,url,address,phone,contact_form,rss_url,twitter,facebook,youtube,youtube_id,bioguide_id,thomas_id,opensecrets_id,lis_id,fec_ids,cspan_id,govtrack_id,votesmart_id,ballotpedia_id,washington_post_id,icpsr_id,wikipedia_id
Brown,Sherrod,,,,Sherrod Brown,1952-11-09,M,sen,OH,,1,Democrat,https://www.brown.senate.gov,503 Hart Senate Office Building Washington DC 20510,202-224-2315,http://www.brown.senate.gov/contact/,http://www.brown.senate.gov/rss/feeds/?type=all&,SenSherrodBrown,SenatorSherrodBrown,SherrodBrownOhio,UCgy8jfERh-t_ixkKKoCmglQ,B000944,00136,N00003535,S307,"H2OH13033,S6OH00163",5051,400050,27018,Sherrod Brown,,29389,Sherrod Brown
Cantwell,Maria,,,,Maria Cantwell,1958-10-13,F,sen,WA,,1,Democrat,https://www.cantwell.senate.gov,511 Hart Senate Office Building Washington DC 20510,202-224-3441,http://www.cantwell.senate.gov/public/index.cfm/email-maria,http://www.cantwell.senate.gov/public/index.cfm/rss/feed,SenatorCantwell,senatorcantwell,SenatorCantwell,UCN52UDqKgvHRk39ncySrIMw,C000127,00172,N00007836,S275,"S8WA00194,H2WA01054",26137,300018,27122,Maria Cantwell,,39310,Maria Cantwell
Then a sample run of the program:
Enter input file name: legislators-current.csv
Enter output file name:  senatorNames.csv
And the first three lines of senatorNames.csv would be:

first_name,last_name
Sherrod,Brown
Maria,Cantwell
Note: if you use the legislators CSV file above, your output file should have 101 lines: 1 line of header information and 100 rows of data.
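A minimal sketch of one way to do this with pandas, assuming the column layout shown above (the prompts follow the sample run; the variable names are placeholders):

import pandas as pd

inName = input('Enter input file name: ')
outName = input('Enter output file name: ')

df = pd.read_csv(inName)
# Keep rows where senate_class is non-empty (i.e. not missing):
senators = df[df['senate_class'].notna()]
# Write only the two requested columns, without the DataFrame index:
senators[['first_name', 'last_name']].to_csv(outName, index=False)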

Program 3: Senators' Ages. Due noon, Wednesday, 1 September.
(Learning Objective: to refresh students' knowledge of Pandas' functionality to create new columns from existing columns of formatted data.)

Write a program that asks the user for the name of an input CSV file and the name of an output CSV file. The program should open the file name provided by the user. Next, the program should select rows where the field senate_class is non-empty, keep the first_name, and compute the age from the birthday field as of the first of the year. Your program should write out a new CSV file (with the name provided by the user) with the two columns: first_name and age.

For example, if the file was legislators-current.csv with the first 3 lines of:


last_name,first_name,middle_name,suffix,nickname,full_name,birthday,gender,type,state,district,senate_class,party,url,address,phone,contact_form,rss_url,twitter,facebook,youtube,youtube_id,bioguide_id,thomas_id,opensecrets_id,lis_id,fec_ids,cspan_id,govtrack_id,votesmart_id,ballotpedia_id,washington_post_id,icpsr_id,wikipedia_id
Brown,Sherrod,,,,Sherrod Brown,1952-11-09,M,sen,OH,,1,Democrat,https://www.brown.senate.gov,503 Hart Senate Office Building Washington DC 20510,202-224-2315,http://www.brown.senate.gov/contact/,http://www.brown.senate.gov/rss/feeds/?type=all&,SenSherrodBrown,SenatorSherrodBrown,SherrodBrownOhio,UCgy8jfERh-t_ixkKKoCmglQ,B000944,00136,N00003535,S307,"H2OH13033,S6OH00163",5051,400050,27018,Sherrod Brown,,29389,Sherrod Brown
Cantwell,Maria,,,,Maria Cantwell,1958-10-13,F,sen,WA,,1,Democrat,https://www.cantwell.senate.gov,511 Hart Senate Office Building Washington DC 20510,202-224-3441,http://www.cantwell.senate.gov/public/index.cfm/email-maria,http://www.cantwell.senate.gov/public/index.cfm/rss/feed,SenatorCantwell,senatorcantwell,SenatorCantwell,UCN52UDqKgvHRk39ncySrIMw,C000127,00172,N00007836,S275,"S8WA00194,H2WA01054",26137,300018,27122,Maria Cantwell,,39310,Maria Cantwell
Then a sample run of the program:
Enter input file name: legislators-current.csv
Enter output file name:  senatorAge.csv
And the first three lines of senatorAge.csv would be:

first_name,age
Sherrod,68
Maria,62
since those were their ages as of the start of the year: January 1, 2021.

Note: if you use the legislators CSV file above, your output file should have 101 lines: 1 line of header information and 100 rows of data.
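A possible sketch, assuming the same column layout and computing the age as of January 1, 2021, as in the example above:

import pandas as pd

inName = input('Enter input file name: ')
outName = input('Enter output file name: ')

df = pd.read_csv(inName)
senators = df[df['senate_class'].notna()].copy()

# Age in whole years as of the start of the year:
jan1 = pd.Timestamp('2021-01-01')
born = pd.to_datetime(senators['birthday'])
# Subtract one year when the birthday has not yet occurred by January 1:
notYet = (born.dt.month > jan1.month) | ((born.dt.month == jan1.month) & (born.dt.day > jan1.day))
senators['age'] = jan1.year - born.dt.year - notYet.astype(int)

senators[['first_name', 'age']].to_csv(outName, index=False)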

Program 4: ELA Proficiency. Due noon, Thursday, 2 September.
(Learning Objective: students can successfully filter formatted data using standard Pandas operations for selecting data.)

Write a program that asks the user for the name of an input CSV file and the name of an output CSV file. The program should open the file name provided by the user. Next, the program should select rows where the field Grade is equal to 3 and the Year is equal to 2019 and write all rows that match that criteria to a new CSV file.

Then a sample run of the program:

Enter input file name: school-ela-results-2013-2019.csv
Enter output file name:  ela2013.csv
where the file school-ela-results-2013-2019.csv is extracted from NYC Schools Test Results (a version truncated to roughly the first 1000 lines is provided for testing). The first lines of the output file would be:

School,Name,Grade,Year,Category,Number Tested,Mean Scale Score,# Level 1,% Level 1,# Level 2,% Level 2,# Level 3,% Level 3,# Level 4,% Level 4,# Level 3+4,% Level 3+4
01M015,P.S. 015 ROBERTO CLEMENTE,3,2019,All Students,27,606,1,3.7,7,25.9,18,66.7,1,3.7,19,70.4
01M019, P.S. 019 ASHER LEVY,3,2019,All Students,24,606,0,0.0,8,33.3,15,62.5,1,4.2,16,66.7
01M020,P.S. 020 ANNA SILVER,3,2019,All Students,57,593,13,22.8,24,42.1,18,31.6,2,3.5,20,35.1
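One possible sketch; since the Grade column mixes numbers with labels such as "All Grades", the comparison below converts it to a string first (an assumption about how pandas reads the file):

import pandas as pd

inName = input('Enter input file name: ')
outName = input('Enter output file name: ')

df = pd.read_csv(inName)
# Grade may be read as text because of entries like "All Grades"; compare as strings:
thirdGrade2019 = df[(df['Grade'].astype(str) == '3') & (df['Year'] == 2019)]
thirdGrade2019.to_csv(outName, index=False)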




Set 2: The second set of programs focuses on incorporating and analyzing rectangular data, in terms of relational databases and data frames. The goal is familiarity with these canonical representations to use as building blocks for future analysis, programs, and your project.

Program 5: URL Collection. Due noon, Friday, 3 September.
(Learning Objective: to use regular expressions with simple patterns to filter column data in a canonical example: scraping URLs from a website.)

Write a program that asks the user for the name of an input HTML file and the name of an output CSV file. Your program should use regular expressions (see Chapter 12.4 for using the re package in Python) to find all links in the input file and store the link text and URL as columns: Title and URL in the CSV file specified by the user. For the URL, strip off the leading https:// or http:// and any trailing slashes (/):

For example, if the input file is:


  <html>
  <head><title>Simple HTML File</title></head>

  <body>
    <p> Here's a link for <a href="http://www.hunter.cuny.edu/csci">Hunter CS Department</a>
    and for <a href="https://stjohn.github.io/teaching/data/fall21/index.html">CSci 39542</a>.  </p>

    <p> And for <a href="https://www.google.com/">google</a>
  </body>
  </html>
Then a sample run of the program:
Enter input file name: simple.html
Enter output file name:  links.csv
And the links.csv would be:

Title,URL
Hunter CS Department,www.hunter.cuny.edu/csci
CSci 39542,stjohn.github.io/teaching/data/fall21/index.html
google,www.google.com
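A sketch of one approach using re.findall; the pattern below assumes the links use double-quoted href attributes, as in the example:

import re
import pandas as pd

inName = input('Enter input file name: ')
outName = input('Enter output file name: ')

with open(inName) as infile:
    html = infile.read()

# Capture the URL inside href="..." and the link text between <a ...> and </a>:
links = re.findall(r'<a href="([^"]*)"[^>]*>(.*?)</a>', html)

rows = []
for url, title in links:
    url = re.sub(r'^https?://', '', url)   # strip leading http:// or https://
    url = re.sub(r'/+$', '', url)          # strip trailing slashes
    rows.append({'Title': title, 'URL': url})

pd.DataFrame(rows, columns=['Title', 'URL']).to_csv(outName, index=False)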


Program 6 is cancelled. See announcement on Blackboard.

Program 6: Regex on Restaurant Inspection Data. Due noon, Thursday, 9 September.
(Learning Objective: The two learning objectives of this exercise are a) to give the students an opportunity to practice their newfound regular expressions (regex) skills and b) to familiarize them with the restaurant inspection dataset, which will be used again in later SQL programs.)

Use regular expressions (covered in Lecture 3 & DS 100: Sections 12.2-3) to clean restaurant inspection datasets that we will use in later SQL programs. Your program should:

For example, if the file was restaurants30July.csv with the first 3 lines of:


CAMIS,DBA,BORO,BUILDING,STREET,ZIPCODE,PHONE,CUISINE DESCRIPTION,INSPECTION DATE,ACTION,VIOLATION CODE,VIOLATION DESCRIPTION,CRITICAL FLAG,SCORE,GRADE,GRADE DATE,RECORD DATE,INSPECTION TYPE,Latitude,Longitude,Community Board,Council District,Census Tract,BIN,BBL,NTA
41178124,CAFE 57,Manhattan,300,WEST   57 STREET,10019,2126492729,American,7/30/2021,Violations were cited in the following area(s).,09C,Food contact surface not properly maintained.,Not Critical,4,A,7/30/2021,8/1/2021,Cycle Inspection / Initial Inspection,40.76643902,-73.98332508,104,3,13900,1025451,1010477502,MN15
50111450,CASTLE CHICKEN,Bronx,5987A,BROADWAY,10471,9178562047,Chicken,7/30/2021,Violations were cited in the following area(s).,05D,Hand washing facility not provided in or near food preparation area and toilet room. Hot and cold running water at adequate pressure to enable cleanliness of employees not provided at facility. Soap and an acceptable hand-drying device not provided.,Critical,41,N,,8/1/2021,Pre-permit (Operational) / Initial Inspection,40.88993027,-73.89805316,208,11,28500,2084208,2058011033,BX29
40699339,NICK GARDEN COFFEE SHOP,Bronx,2953,WEBSTER AVENUE,10458,7183652277,Coffee/Tea,7/30/2021,Violations were cited in the following area(s).,08A,Facility not vermin proof. Harborage or conditions conducive to attracting vermin to the premises and/or allowing vermin to exist.,Not Critical,31,,,8/1/2021,Cycle Inspection / Initial Inspection,40.86759042,-73.88308647,207,11,41500,2016446,2032800061,BX05
Then a sample run of the program:

Enter input file name: restaurants30July.csv
Enter output file name:  july30filtered.csv
  
And the first three lines of july30filtered.csv would be:

CAMIS,DBA,BORO,BUILDING,STREET,ZIPCODE,PHONE,CUISINE DESCRIPTION,INSPECTION DATE,ACTION,VIOLATION CODE,VIOLATION DESCRIPTION,CRITICAL FLAG,SCORE,GRADE,GRADE DATE,RECORD DATE,INSPECTION TYPE,Latitude,Longitude,Community Board,Council District,Census Tract,BIN,BBL,NTA,restaurant_name,thai_boolean
41178124,CAFE 57,Manhattan,300,WEST  57 STREET,10019,+1-212-649-2729,American,2021/07/30,Violations were cited in the following area(s).,09C,Food contact surface not properly maintained.,Not Critical,4,A,7/30/2021,8/1/2021,Cycle Inspection / Initial Inspection,40.76643902,-73.98332508,104,3,13900,1025451,1010477502,MN15,Cafe 57 ,False
50111450,CASTLE CHICKEN,Bronx,5987A,BROADWAY,10471,+1-917-856-2047,Chicken,2021/07/30,Violations were cited in the following area(s).,05D,Hand washing facility not provided in or near food preparation area and toilet room. Hot and cold running water at adequate pressure to enable cleanliness of employees not provided at facility. Soap and an acceptable hand-drying device not provided.,Critical,41,N,,8/1/2021,Pre-permit (Operational) / Initial Inspection,40.88993027,-73.89805316,208,11,28500,2084208,2058011033,BX29,Castle Chicken ,False
40699339,NICK GARDEN COFFEE SHOP,Bronx,2953,WEBSTER AVENUE,10458,+1-718-365-2277,Coffee/Tea,2021/07/30,Violations were cited in the following area(s).,08A,Facility not vermin proof. Harborage or conditions conducive to attracting vermin to the premises and/or allowing vermin to exist.,Not Critical,31,,,8/1/2021,Cycle Inspection / Initial Inspection,40.86759042,-73.88308647,207,11,41500,2016446,2032800061,BX05,Nick Garden Coffee Shop ,False

Program 7: Neighborhood Tabulation Areas. Due noon, Friday, 10 September.
(Learning Objective: The learning objective of this exercise is to give the students an opportunity to practice their newfound SQL skills.)

The package pandasql provides an easy way to use SQL queries directly on a Pandas DataFrame. (You may need to install it in your environment, e.g. pip install pandasql.)

Once installed, you can run queries via the function sqldf(queryName). For example, you could filter for all students in the roster.csv on the waitlist by:


import pandas as pd
import pandasql as psql
roster = pd.read_csv('roster.csv')

q = 'SELECT * FROM roster WHERE Role = "Waitlist Student"'
waitList = psql.sqldf(q)

print(waitList)

For this program, ask the user for the input and output file names. You should assume that the input file contains the New York City Neighborhood Tabulation Areas such as nynta.csv. Use sqldf(queryName) to filter the dataset to return the NTACode and NTAName columns, labeled as NTA and NTA_Name, respectively. You should save the result as a CSV in the output file named by the user.
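A minimal sketch, assuming the nynta.csv column names NTACode and NTAName mentioned above:

import pandas as pd
import pandasql as psql

inName = input('Enter input file name: ')
outName = input('Enter output file name: ')

nta = pd.read_csv(inName)
# Rename the two columns in the query itself with AS:
q = 'SELECT NTACode AS NTA, NTAName AS NTA_Name FROM nta'
result = psql.sqldf(q)
result.to_csv(outName, index=False)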

Program 8: Restaurant SQL Queries. Due noon, Monday, 13 September.

Your program should ask for the input file name (must include .csv) and then for an output file prefix (must not include any extension). For example, with restaurantJuly2020.csv for the input and selected for the output prefix, the program should create 4 files: selectedA.csv, selected70.csv, selectedZIP.csv, and selectedAll.csv.

Using SQL (see DS 100: Section 6.2), extract the following information from a restaurant inspection dataset (a small file of inspections from 30 July is available: restaurants30July.csv):

Note: The file names are case-sensitive, so the autograder will not accept ... ALL.csv for ... All.csv.

Program 9: Aggregating Restaurant Data (SQL). Due noon, Tuesday, 14 September.
(Learning Objective: The learning objective of this exercise is to give the students an opportunity to practice more advanced SQL skills (e.g. GROUP BY's) on a familiar dataset.)

Using the more advanced SQL commands from DS 100: Section 5.1 (e.g. GROUP BY's), this program finds distinct restaurant names and distinct cuisines by locale. For testing, a small file of inspections from 1 August is available: brooklynJuly2021.csv.

Your program should ask for the input file name (must include .csv) and then for an output file prefix (must not include any extension).

For example, if you entered brooklynJuly2021.csv and selected for the output prefix, the program should create 4 files: selectedRestaurants.csv, selectedCuisines11224.csv, selectedCounts11224.csv, and selectedBoro.csv. The first several lines of selectedRestaurants.csv are:

DBA
1 HOTEL BROOKLYN BRIDGE
14 OLD FULTON STREET
98K
99 CENT PIZZA
ABURI SUSHI BAR

The file selectedCuisines11224.csv is:

cnt
American
(since our test file only has restaurants that serve American food in the 11224 zipcode)

The file selectedCounts11224.csv is:

CUISINE DESCRIPTION,COUNT(DISTINCT DBA)
American,3

The file selectedBoro.csv is:

borough,cnt_cuisine,cnt_restaurants
Brooklyn,50,384

Program 10: Extracting Districts. Due noon, Monday, 20 September.
(Learning Objective: successfully write and apply functions to DataFrames to clean data.)

Write a program that asks the user for the name of an input CSV file and the name of an output CSV file. The program should open the file name provided by the user. Your program should include a function, extractDistrict() that takes a string as an input and returns the number represented by the first two characters in the string:


def extractDistrict(name):
    '''
    Extracts the district (first two characters) as an integer.
    Input:  Character string containing district + school num (e.g. "01M015")
    Returns:  The first two characters as an integer (e.g. 1)
    '''

    #### Your code goes here ####
Your program should apply this function to the School field of each row, converting the first two characters into a number and storing the results in a new column, District. That is,
df['District'] = df['School'].apply(extractDistrict)

For example, if the School is "01M015", the entry in the new column would be 1 (stored as a number, not a string).

The results should be written to a new CSV file, with the name provided by the user.

Program 11: Joining Restaurant & NTA Data. Due noon, Tuesday, 21 September.
(Learning Objective: The exercises in this program will build up to help students conceptualize and finally create a JOIN between the health inspection table and the NTA table. This is to reinforce the learning done in the last 2 SQL lectures.)

For testing, a small file of inspections from 30 July is available: restaurants30July.csv and the Neighborhood Tabulation Areas (NTA): nta.csv.

Your program should ask for two input file names (must include .csv) and then for an output file prefix (must not include any extension). For example, with restaurantJuly2020.csv and nta.csv for the inputs and selected for the output prefix, the program should create 6 files: selected1.csv, selected2.csv, selected3.csv, selected4.csv, selected5.csv, and selected6.csv.

  1. Save the NTA column from the restaurant inspection table to the output file prefix+"1.csv" where prefix holds the value specified by the user.
  2. Save the count of unique NTAs in the restaurant health inspection table to the output file prefix+"2.csv" where prefix holds the value specified by the user. (Note this will have a single column and a single value.)
  3. Save the NTA column and the count of the distinct restaurants from the restaurant inspection table to the output file prefix+"3.csv" where prefix holds the value specified by the user. (Hint: how can you use GROUP BY to organize the output?)
  4. Save the number of rows in the NTA table and the number of unique NTAs in the NTA table to the output file prefix+"4.csv" where prefix holds the value specified by the user. (Note this will have two rows and two columns.)
  5. Save the names of the restaurant and its NTA which can be found via a LEFT JOIN of the restaurant inspection table and NTA table. Save the results to the output file prefix+"5.csv" where prefix holds the value specified by the user. (Hint: join on the NTA code found in both (but using slightly different names). Your output should have two columns.)
  6. Building on the result from 5) above, keep the LEFT JOIN as is and do one more level of aggregation, so that the end result contains 3 columns (unique NTA code, unique NTA description, and the count of distinct restaurants, grouped by the first 2 columns). Save the result to the output file prefix+"6.csv" where prefix holds the value specified by the user.

Program 12: MTA Ridership. Due noon, Thursday, 23 September.
(Learning Objective: to reinforce Pandas skills via use for data aggregating and data cleaning.)

In the next lecture, we will be summarizing time-series data and using a cleaned version of MTA subway and bus ridership, inspired by Oldenburg's NYC Transit Turnstile Data.

Write a program that asks the user for the name of an input CSV file and the name of an output CSV file. The program should open the file name provided by the user, which you can assume will include the column names: date, entries, and exits. You should create a new file that has one entry for each date that consists of the sum of all entries and the sum of all exits that occur on that date. This aggregate data should be stored in the output CSV and should contain only the three columns: date, entries, and exits, even if there are additional columns in the input CSV file.

For example, if the file was the 2020 data for Staten Island, mta_trunc_staten_island.csv with the first 3 lines of:

stop_name,daytime_routes,division,line,borough,structure,gtfs_longitude,gtfs_latitude,complex_id,date,entries,exits
St George,SIR,SIR,Staten Island,SI,Open Cut,-74.073643,40.643748,501,2020-01-01,2929,0
St George,SIR,SIR,Staten Island,SI,Open Cut,-74.073643,40.643748,501,2020-01-02,13073,0
St George,SIR,SIR,Staten Island,SI,Open Cut,-74.073643,40.643748,501,2020-01-03,11857,23
Then a sample run of the program:
Enter input file name: mta_trunc_staten_island.csv
Enter output file name:  filteredSI.csv
And the first three lines of filteredSI.csv would be:
date,entries,exits
2020-01-01,3128,0
2020-01-02,13707,0
2020-01-03,12507,23
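A sketch of the aggregation with groupby, assuming the column names shown in the sample file:

import pandas as pd

inName = input('Enter input file name: ')
outName = input('Enter output file name: ')

df = pd.read_csv(inName)
# Sum the entries and exits over all stations for each date:
daily = df.groupby('date', as_index=False)[['entries', 'exits']].sum()
daily.to_csv(outName, index=False)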




Set 3: The third set of programs integrates visualization techniques with analyzing structured data sets. While the programs do not cover every visualization technique, the practice these programs provide will be directly relevant to your project.

Program 13: Column Summaries. Due noon, Friday, 24 September.
(Learning Objective: to strengthen function-writing skills and examine alternate ways to summarize time-series data.)

In lecture, we used the Pandas function rolling() to compute a 7-day average of subway ridership for the visualization of ridership in 2020. For this program, write three functions that take as input a Pandas series (e.g. a column of a DataFrame) and highlight different patterns in the data:

Note: you should submit a file with only the standard comments at the top, and these three functions. The grading scripts will then import the file for testing.

Program 14: Library Cleaning. Due noon, Monday, 27 September.
(Learning Objective: to strengthen data processing skills using regular expressions and standard string methods.)

Write two functions that will be used to clean the OpenData NYC dataset of Libraries in New York City (downloaded as a CSV file). The first three lines of the CSV file look like:


the_geom,NAME,STREETNAME,HOUSENUM,CITY,ZIP,URL,BIN,BBL,X,Y,SYSTEM,BOROCODE
POINT (-73.95353074430393 40.80297988196676),115th Street,West 115th Street,203,New York,10026,http://www.nypl.org/locations/115th-street,1055236,1018310026,997115.12977,231827.652864,NYPL,1
POINT (-73.9348475633247 40.80301816141575),125th Street,East 125th Street,224,New York,10035,http://www.nypl.org/locations/125th-street,1054674,1017890037,1002287.604,231844.894956,NYPL,1
Each function takes as input a row of the table:

Note: you should submit a file with only the standard comments at the top, and these two functions. The grading scripts will then import the file for testing. A sample test program that assumes your program is called p14.py and the CSV file is called LIBRARY.csv is test14.py.

Program 15: Plotting Challenge. Due noon, Tuesday, 28 September.
(Learning Objective: to explore and master matplotlib.pyplot commands to create data visualizations.)

The goal is to create a plot of NYC OpenData Motor Vehicle Collisions that follows this style. For example, here is the plot for the January 2020 dataset:

Your program should begin by asking the user for input and output files. It should be written to take any dataset from the NYC OpenData Motor Vehicle Collisions and produce an image that matches this style. The resulting image should be saved to the output file specified by the user.

Hint: to transform the data into separate columns (i.e. "unstack"/pivot the groups to be columns) for the daily number of collisions for each borough:

boroDF = df.groupby(['BOROUGH','CRASH DATE']).count()['CRASH TIME'].unstack().transpose()
where df is the DataFrame with the collisions data.
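Since the target image is not reproduced here, the following is only a rough sketch built on the hint above; the title, labels, and styling are placeholders that need to be matched to the posted example:

import pandas as pd
import matplotlib.pyplot as plt

inName = input('Enter input file name: ')
outName = input('Enter output file name: ')

df = pd.read_csv(inName)
# Daily collision counts, one column per borough (the "unstack" trick from the hint):
boroDF = df.groupby(['BOROUGH', 'CRASH DATE']).count()['CRASH TIME'].unstack().transpose()
boroDF.index = pd.to_datetime(boroDF.index)
boroDF = boroDF.sort_index()

boroDF.plot()                       # one line per borough
plt.xlabel('Date')                  # placeholder labels; match them to the target style
plt.ylabel('Number of Collisions')
plt.legend(title='Borough')
plt.savefig(outName)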

Program 16: Choropleth Attendance Cleaning. Due noon, Thursday, 30 September.
(Learning Objective: to gain competency cleaning data using pandas functions.)

In lecture, we wrote a program, schoolsChoropleth.py, using the school district files used in Programs 10 & 11 to make a choropleth map of top English Language Arts scores, by district, in New York City:

For this program, write a program that cleans district school attendance data so that we can use the same visualization to see attendance by district.

Your stand-alone program should ask the user for the input file name, the output file name, as well as the grade and school year to use as filters. For example, a sample run of the program on public-district-attendance-results-2014-2019.csv:


Enter input file name: public-district-attendance-results-2014-2019.csv
Enter output file name: attendanceThirdGrade2019.csv
Enter grade: 3
Enter year: 2018-19
If the input file starts as:

District,Grade,Year,Category,# Total Days,# Days Absent,# Days Present,% Attendance,# Contributing 20+ Total Days,# Chronically Absent,% Chronically Absent
1,All Grades,2013-14,All Students,2088851,187879,1900972,91.0,12617,3472,27.5
1,All Grades,2014-15,All Students,2064610,171200,1893410,91.7,12295,3160,25.7
1,All Grades,2015-16,All Students,1995704,169094,1826610,91.5,12137,3206,26.4
1,All Grades,2016-17,All Students,1946012,161756,1784256,91.7,11916,3110,26.1
1,All Grades,2017-18,All Students,1946527,167998,1778529,91.4,11762,3244,27.6
1,All Grades,2018-19,All Students,1925995,175153,1750842,90.9,11593,3364,29.0
then the output file would start:

District,Grade,Year,Category,# Total Days,# Days Absent,# Days Present,% Attendance,# Contributing 20+ Total Days,# Chronically Absent,% Chronically Absent
1,3,2018-19,All Students,149871,10601,139270,92.9,876,228,26.0
2,3,2018-19,All Students,491432,21170,470262,95.7,2844,278,9.8
3,3,2018-19,All Students,254506,15395,239111,94.0,1488,274,18.4

Hints:

Program 17: Grouping ELA/Math by Districts. Due noon, Friday, 1 October.
(Learning Objective: to successfully combine information from multiple input files and display the results using a pivot table.)

Your program should build on the classwork from Lectures #6 and #9 to build a pivot table, grouped by district and test subject, of the percentage of students that are proficient in each (i.e. score 3 or 4 on the exam). Your program should ask the user for two input CSV files and print a pivot table.

Then a sample run of the program with files truncated to a few schools per district for testing (ela_trunc.csv and math_trunc.csv) starts as:
Enter file containing ELA scores: ela_trunc.csv
Enter file containing MATH scores: math_trunc.csv
                    Proficiency                      School Name
District Subject
01       ELA        91.891892  THE EAST VILLAGE COMMUNITY SCHOOL
         MATH       84.615385               P.S. 184M SHUANG WEN
02       ELA        96.825397           P.S. 77 LOWER LAB SCHOOL
         MATH       98.412698           P.S. 77 LOWER LAB SCHOOL
and continues with the top-scoring schools for each test and each district printed.

Hints:

Program 18: Log Scale. Due noon, Monday, 4 October.
(Learning Objective: gain competency in scaling data via log transformations.)

In Lecture #9 and Section 11.5, we used log scale to visualize data. Since the logarithm function is not defined on non-positive data, we are first going to write a function that removes any tuple that has a 0 or negative value. Our second function transforms the cleaned data to its log values.

Write two functions to be used to display data on a log scale. Each function takes and returns two iterables of numeric values (e.g. a Series, np.array, or list restricted to numeric values):

Note: you should submit a file with only the standard comments at the top and these two functions. The grading scripts will then import the file for testing. A sample test program that assumes your program is called p18.py is test18.py.
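The assignment spells out the exact names and signatures; the hypothetical names below just sketch the two steps:

import numpy as np

def removeNonPositive(xes, yes):
    # Hypothetical name: keep only the pairs where both values are strictly positive.
    pairs = [(x, y) for x, y in zip(xes, yes) if x > 0 and y > 0]
    return [x for x, _ in pairs], [y for _, y in pairs]

def logTransform(xes, yes):
    # Hypothetical name: assumes the data has already been cleaned of non-positive values.
    return np.log(xes), np.log(yes)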

Program 19: Smoothing with Gaussians. Due noon, Tuesday, 5 October.
(Learning Objective: increase understanding of smoothing and gain fluidity with using distributions for smoothing.)

In Lecture #9 and Section 11.5, we used smoothing to visualize data. For this program, write a function that takes two arguments, a NumPy array of x-axis coordinates and a list of numeric values, and returns the corresponding y-values for the sum of the Gaussian probability density functions (pdf's) centered at each point in the list.

For example, calling the function:

xes = np.linspace(0, 10, 1000)
density = computeSmoothing(xes,[5])
plt.plot(xes,density)
plt.show()
would give the plot:

since there is only one point given (namely 5), the returned value is the probability density function centered at 5 (with scale = 0.5) computed for each of the xes.

For example, calling the function:

pts = [2,2,5,5,2,3,4,6,7,9]
xes = np.linspace(0, 10, 1000)
density = computeSmoothing(xes,pts)
plt.plot(xes,density)
plt.fill_between(xes,density)
plt.show()
would give the plot:

since there are 10 points given, the function computes the probability density function centered at each of the points, across all the values in xes. It then sums up these contributions and returns an array of the same length as xes.
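A minimal sketch of computeSmoothing using scipy.stats.norm with scale = 0.5, as in the examples above:

import numpy as np
from scipy.stats import norm

def computeSmoothing(xes, pts):
    # Sum the Gaussian pdf centered at each point, evaluated at every x in xes:
    density = np.zeros(len(xes))
    for p in pts:
        density += norm.pdf(xes, loc=p, scale=0.5)
    return density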

Note: you should submit a file with only the standard comments at the top, and this function. The grading scripts will then import the file for testing.

Hint: Include only the libraries you need (such as numpy and scipy.stats) and none of the ones for plotting (such as matplotlib.pyplot and seaborn) since this function is computing and not plotting.





Set 4: The fourth set of programs introduces modeling and estimation, focusing on loss functions and linear modeling.

Program 20: Loss Functions for Tips. Due noon, Thursday, 7 October.
(Learning Objective: strengthen competency with loss functions by applying the techniques to a dataset of tips.)

In Lecture #10 and Section 4.2, we introduced loss functions to measure how well our estimates fit the data.

Using the mean squared loss function mse_loss and the mean absolute loss function abs_loss (Section 4.2), write two separate functions that take in estimates and tip data and return the respective loss for each of the estimates on the data.

Note: for each of these functions, your returned value will be an iterable with the same length as thetas.
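A sketch of the two functions (the names mse_estimates and mae_estimates match the example calls below); each returns one loss value per estimate in thetas:

import numpy as np

def mse_estimates(thetas, y_vals):
    # Mean squared loss of each candidate estimate against the data:
    return [np.mean((np.array(y_vals) - t) ** 2) for t in thetas]

def mae_estimates(thetas, y_vals):
    # Mean absolute loss of each candidate estimate against the data:
    return [np.mean(np.abs(np.array(y_vals) - t)) for t in thetas]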

For example, calling the function:

thetas = np.array([12, 13, 14, 15, 16, 17])
y_vals = np.array([12.1, 12.8, 14.9, 16.3, 17.2])
mse_losses = p20.mse_estimates(thetas,y_vals)
abs_losses = p20.mae_estimates(thetas,y_vals)
plt.scatter(thetas, mse_losses, label='MSE')
plt.scatter(thetas, abs_losses, label='MAE')
plt.title(r'Loss vs. $ \theta $ when $ \bf{y}$$= [ 12.1, 12.8, 14.9, 16.3, 17.2 ] $')
plt.xlabel(r'$ \theta $ Values')
plt.ylabel('Loss')
plt.legend()
plt.show()
would give the plot:

For example, calling the function:

thetas = np.arange(30)
tips_df = sns.load_dataset('tips')
tipsPercent = (tips_df['tip']/tips_df['total_bill'])*100
mse_losses = p20.mse_estimates(thetas, tipsPercent)
abs_losses = p20.mae_estimates(thetas, tipsPercent)
plt.plot(thetas, mse_losses, label='MSE')
plt.plot(thetas, abs_losses, label='MAE')
plt.title(r'Loss vs. $ \theta $ for sns tips data')
plt.xlabel(r'$ \theta $ Values')
plt.ylabel('Loss')
plt.legend()
plt.show()
would give the plot:

Note: you should submit a file with only the standard comments at the top and these functions. The grading scripts will then import the file for testing.

Hint: Include only the libraries you need (such as numpy) and none of the ones for plotting (such as matplotlib.pyplot and seaborn) since this function is computing and not plotting.

Program 21: Taxi Cleaning. Due noon, Friday, 8 October.
(Learning Objective: To build up (or refresh) skills at manipulating tabular data, in particular, to use arithmetic operations on columns to create new columns.)

Write a program, tailored to the NYC OpenData Yellow Taxi Trip Data, that asks the user for the name of an input CSV file and the name of an output CSV file. The program should open the file name provided by the user. Next, the program should copy the input file and create two new columns: percent_tip, which is 100*tip_amount/fare_amount and percent_fare, which is 100*fare_amount/total_amount. Your program should write out a new CSV file (with the name provided by the user) with the original columns as well as the two newly computed ones.

For example, if the file, taxi_new_years_day_2020.csv, was the first of January 2020 entries downloaded from 2020 Yellow Taxi Trip Data (about 170,000 entries) with the first 3 lines of:

VendorID,tpep_pickup_datetime,tpep_dropoff_datetime,passenger_count,trip_distance,RatecodeID,store_and_fwd_flag,PULocationID,DOLocationID,payment_type,fare_amount,extra,mta_tax,tip_amount,tolls_amount,improvement_surcharge,total_amount,congestion_surcharge
1,01/01/2020 12:00:00 AM,01/01/2020 12:13:03 AM,1,2.2,1,N,68,170,1,10.5,3,0.5,2.85,0,0.3,17.15,2.5
2,01/01/2020 12:00:00 AM,01/01/2020 01:08:55 AM,5,1.43,1,N,48,239,2,6.5,0.5,0.5,0,0,0.3,10.3,2.5
Then a sample run of the program:
Enter input file name: taxi_new_years_day2020.csv
Enter output file name:  taxi_Jan2020_with_percents.csv
And the first three lines of taxi_Jan2020_with_percents.csv would be:
VendorID,tpep_pickup_datetime,tpep_dropoff_datetime,passenger_count,trip_distance,RatecodeID,store_and_fwd_flag,PULocationID,DOLocationID,payment_type,fare_amount,extra,mta_tax,tip_amount,tolls_amount,improvement_surcharge,total_amount,congestion_surcharge,percent_tip,percent_fare
1.0,01/01/2020 12:00:00 AM,01/01/2020 12:13:03 AM,1.0,2.2,1.0,N,68,170,1.0,10.5,3.0,0.5,2.85,0.0,0.3,17.15,2.5,27.1,61.2
2.0,01/01/2020 12:00:00 AM,01/01/2020 01:08:55 AM,5.0,1.43,1.0,N,48,239,2.0,6.5,0.5,0.5,0.0,0.0,0.3,10.3,2.5,0.0,63.1

You should round the values stored in your new columns to the nearest tenth and save your CSV file without the indexing (i.e. index=False).
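A minimal sketch, assuming the yellow-taxi column names shown above:

import pandas as pd

inName = input('Enter input file name: ')
outName = input('Enter output file name: ')

df = pd.read_csv(inName)
df['percent_tip'] = (100 * df['tip_amount'] / df['fare_amount']).round(1)
df['percent_fare'] = (100 * df['fare_amount'] / df['total_amount']).round(1)
df.to_csv(outName, index=False)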

Program 22: Dice Simulator. Due noon, Thursday, 14 October.
(Learning Objective: students will be able to apply their knowledge of the built-in random package to generate simulations of simple phenomena.)

Write a function:

Since the numbers are chosen at random, the fractions will differ somewhat from run to run. One run of the function print(p22.diceSim(6,6,10000)) resulted in:


  [0.     0.     0.0259 0.0615 0.0791 0.1086 0.139  0.1633 0.1385 0.114  0.0833 0.0587 0.0281]
or displayed using the code from Section 16.1.1.:

Note: you should submit a file with only the standard comments at the top and the function. The grading scripts will then import the file for testing.
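A sketch of diceSim with the signature suggested by the example call (the parameter names are placeholders):

import numpy as np

def diceSim(numSides1, numSides2, trials):
    # Simulate rolling the two dice `trials` times:
    rolls = (np.random.randint(1, numSides1 + 1, trials)
             + np.random.randint(1, numSides2 + 1, trials))
    # Fraction of rolls for each possible sum 0, 1, ..., numSides1 + numSides2:
    counts = np.bincount(rolls, minlength=numSides1 + numSides2 + 1)
    return counts / trials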

Program 23: Correlation Coefficients. Due noon, Friday, 15 October.
(Learning Objective: to refresh students' knowledge of Pearson's correlation coefficient and to increase fluidity with using statistical functions in Python.)

Write a function that will find the column with the highest absolute correlation coefficient in a DataFrame. Your function should take as inputs the column of interest, a list of possibly correlated columns, and the DataFrame. The function should return the name and Pearson's r correlation coefficient (can be computed using the Pandas function series1.corr(series2), where series1 and series2 are Pandas Series):

For example, assuming your function findHighestCorr() is in p23.py:

simpleDF = pd.DataFrame({'c1': [1,2,3,4],\
                         'c2': [0,1,0,1],\
                         'c3': [1,10,3,20],\
                         'c4': [-10,-20,-30,-40],})
print('Testing with c1 and [c3,c4]:')
print(p23.findHighestCorr('c1',['c3','c4'],simpleDF))
print(f'c1 has highest absolute r with {p23.findHighestCorr("c1",simpleDF.columns, simpleDF)}.')
Would give output:
Testing with c1 and [c3,c4]:
('c4', -1.0)
c1 has highest absolute r with ('c1', 1.0)
since the correlation coefficients between simpleDF['c1'] and the other 3 columns are 0.4472135954999579, 0.7520710469952336, and -1.0, respectively, and the largest absolute correlation is with simpleDF['c4'].

Using the function on the seaborn tips dataset:

import seaborn as sns
tips = sns.load_dataset('tips')
print(f"Correlation coefficient between tips and size is \
        {tips['tip'].corr(tips['size'])}")
print(f"For tip, the highest correlation is \
        {p23.findHighestCorr('tip',['total_bill','size'],tips)}.")
will print
Correlation coefficient between tips and size is         0.4892987752303577
For tip, the highest correlation is         ('total_bill', 0.6757341092113641).
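A sketch of findHighestCorr that matches the calls above:

def findHighestCorr(col, candidates, df):
    # Track the candidate column with the largest absolute Pearson's r against df[col]:
    best = None
    for c in candidates:
        r = df[col].corr(df[c])
        if best is None or abs(r) > abs(best[1]):
            best = (c, r)
    return best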

Note: you should submit a file with only the standard comments at the top, this function, and any helper functions you have written. The grading scripts will then import your file for testing.

Program 24: Enrollments. Due noon, Monday, 18 October.
(Learning Objective: to use standard Pandas functions to filter rows, aggregates values and create new columns.)

Write a function, computeEnrollments(), that takes a DataFrame that contains students' names, number of credits completed, and current courses (a string with the course names separated by ` `), and returns a DataFrame that

  1. Includes only students taking 3 or more courses, and
  2. Replaces the column of current courses with three different columns: the first counts the total number of courses the student is taking, the second has the number of computer science courses the student is currently taking (all courses whose names start with 'csci'), and the third has the number of other courses the student is taking.

For example, assuming your function computeEnrollments() is in p24.py:

classDF = pd.DataFrame({'Name': ["Ana","Bao","Cara","Dara","Ella","Fatima"],\
                      '# Credits': [45,50,80,115,30,90],\
                      'Current Courses': ["csci160 csci235 math160 jpn201",\
                                          "csci160 csci235 cla101 germn241",\
                                          "csci265 csci335 csci39542 germn241",\
                                          "csci49362 csci499",\
                                          "csci150 csci235 math160",\
                                          "csci335 csci39542 cla101 dan102"]})
print(f'Starting df:\n {classDF}')
print(f'Ending df:\n {p24.computeEnrollments(classDF)}')
Would give output:
Starting df:
      Name  # Credits                     Current Courses
0     Ana         45      csci160 csci235 math160 jpn201
1     Bao         50     csci160 csci235 cla101 germn241
2    Cara         80  csci265 csci335 csci39542 germn241
3    Dara        115                   csci49362 csci499
4    Ella         30             csci150 csci235 math160
5  Fatima         90     csci335 csci39542 cla101 dan102

Ending df:
      Name  # Credits  NumCourses  CS  Other
0     Ana         45           4   2      2
1     Bao         50           4   2      2
2    Cara         80           4   3      1
4    Ella         30           3   2      1
5  Fatima         90           4   2      2

The resulting DataFrame has only 5 students, since the student, Dara, has fewer than 3 current courses and that row is dropped.
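A sketch of computeEnrollments using string splitting; the new column names follow the expected output above:

import pandas as pd

def computeEnrollments(df):
    out = df.copy()
    courses = out['Current Courses'].str.split()
    out['NumCourses'] = courses.apply(len)
    out['CS'] = courses.apply(lambda cs: sum(c.startswith('csci') for c in cs))
    out['Other'] = out['NumCourses'] - out['CS']
    # Keep only students taking 3 or more courses and drop the original text column:
    return out[out['NumCourses'] >= 3].drop(columns=['Current Courses'])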

Note: you should submit a file with only the standard comments at the top, this function, and any helper functions you have written. The grading scripts will then import the file for testing.

Hints:

Program 25: PMF of Senators' Ages. Due noon, Tuesday, 19 October.
(Learning Objective: to build intuition and strengthen competency with probability mass functions by analysing ages of public officials.)

Section 16.1 (Random Variables) of the textbook has a small example in which the probability mass function of a data set of ages is computed by hand. Write a function that will automate this process:

For example, calling the function on the example from the textbook:

x, y = p25.pmf([50,50,52,54])
print(f'The values are: {x}')
print(f'The pmf is: {y}')
print(f'The sum of the pmf is: {sum(y)}.')
plt.bar(x,y)
plt.show()
would print:
The values are: (50, 52, 54)
The pmf is: (0.5, 0.25, 0.25)
The sum of the pmf is: 1.0.
and would give the plot:

For example, calling the function on the senators' ages from Program 3:

senators = pd.read_csv('senatorsAges.csv')
xSen,ySen = p25.pmf(senators['age'])
plt.bar(xSen,ySen)
plt.show()
would give the plot:

Note: you should submit a file with only the standard comments at the top, this function, and any helper functions you have written. The grading scripts will then import the file for testing.

Hint: Include only the libraries you need (such as numpy) and none of the ones for plotting (such as matplotlib.pyplot and seaborn) since this function is computing and not plotting.
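A sketch of pmf using np.unique, matching the example calls above:

import numpy as np

def pmf(values):
    # Distinct values and the fraction of the data equal to each one:
    vals, counts = np.unique(values, return_counts=True)
    return tuple(vals), tuple(counts / len(values))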

Program 26: Weekday Entries. Due noon, Thursday, 21 October.
(Learning Objective: to strengthen data cleaning skills and familiarity with standard date/time formats.)

Use the date time functionality of Pandas to write the following functions:

The examples below use the green taxi data from seaborn: they print the first 10 lines, create a new column of trip times, and filter for weekdays.

For example, using Seaborn's Green Taxi Data Set and assuming your functions are in p26.py:

taxi = sns.load_dataset('taxis')
print(taxi.iloc[0:10])  #Print first 10 lines:
taxi['tripTime'] = taxi.apply(lambda x: p26.tripTime(x['pickup'], x['dropoff']), axis=1)
print(taxi.iloc[0:10])
Would give output:
                pickup              dropoff  ...  pickup_borough  dropoff_borough
0  2019-03-23 20:21:09  2019-03-23 20:27:24  ...       Manhattan        Manhattan
1  2019-03-04 16:11:55  2019-03-04 16:19:00  ...       Manhattan        Manhattan
2  2019-03-27 17:53:01  2019-03-27 18:00:25  ...       Manhattan        Manhattan
3  2019-03-10 01:23:59  2019-03-10 01:49:51  ...       Manhattan        Manhattan
4  2019-03-30 13:27:42  2019-03-30 13:37:14  ...       Manhattan        Manhattan
5  2019-03-11 10:37:23  2019-03-11 10:47:31  ...       Manhattan        Manhattan
6  2019-03-26 21:07:31  2019-03-26 21:17:29  ...       Manhattan        Manhattan
7  2019-03-22 12:47:13  2019-03-22 12:58:17  ...       Manhattan        Manhattan
8  2019-03-23 11:48:50  2019-03-23 12:06:14  ...       Manhattan        Manhattan
9  2019-03-08 16:18:37  2019-03-08 16:26:57  ...       Manhattan        Manhattan

[10 rows x 14 columns]
                pickup              dropoff  ...  dropoff_borough        tripTime
0  2019-03-23 20:21:09  2019-03-23 20:27:24  ...        Manhattan 0 days 00:06:15
1  2019-03-04 16:11:55  2019-03-04 16:19:00  ...        Manhattan 0 days 00:07:05
2  2019-03-27 17:53:01  2019-03-27 18:00:25  ...        Manhattan 0 days 00:07:24
3  2019-03-10 01:23:59  2019-03-10 01:49:51  ...        Manhattan 0 days 00:25:52
4  2019-03-30 13:27:42  2019-03-30 13:37:14  ...        Manhattan 0 days 00:09:32
5  2019-03-11 10:37:23  2019-03-11 10:47:31  ...        Manhattan 0 days 00:10:08
6  2019-03-26 21:07:31  2019-03-26 21:17:29  ...        Manhattan 0 days 00:09:58
7  2019-03-22 12:47:13  2019-03-22 12:58:17  ...        Manhattan 0 days 00:11:04
8  2019-03-23 11:48:50  2019-03-23 12:06:14  ...        Manhattan 0 days 00:17:24
9  2019-03-08 16:18:37  2019-03-08 16:26:57  ...        Manhattan 0 days 00:08:20

[10 rows x 15 columns]

Using our second function:

taxi = sns.load_dataset('taxis')
weekdays = p26.weekdays(taxi,'pickup')
print(weekdays.iloc[0:10])
will give output:

  pickup              dropoff  ...  pickup_borough  dropoff_borough
1   2019-03-04 16:11:55  2019-03-04 16:19:00  ...       Manhattan        Manhattan
2   2019-03-27 17:53:01  2019-03-27 18:00:25  ...       Manhattan        Manhattan
5   2019-03-11 10:37:23  2019-03-11 10:47:31  ...       Manhattan        Manhattan
6   2019-03-26 21:07:31  2019-03-26 21:17:29  ...       Manhattan        Manhattan
7   2019-03-22 12:47:13  2019-03-22 12:58:17  ...       Manhattan        Manhattan
9   2019-03-08 16:18:37  2019-03-08 16:26:57  ...       Manhattan        Manhattan
11  2019-03-20 19:39:42  2019-03-20 19:45:36  ...       Manhattan        Manhattan
12  2019-03-18 21:27:14  2019-03-18 21:34:16  ...       Manhattan        Manhattan
13  2019-03-19 07:55:25  2019-03-19 08:09:17  ...       Manhattan        Manhattan
14  2019-03-27 12:13:34  2019-03-27 12:25:48  ...       Manhattan        Manhattan

[10 rows x 14 columns]
Note that rows 0, 3, 4, 8, and 10 have been dropped from the original DataFrame since those corresponded to weekend days.
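A sketch of the two functions, matching the example calls above:

import pandas as pd

def tripTime(pickup, dropoff):
    # Elapsed time between two date/time strings, as a Timedelta:
    return pd.to_datetime(dropoff) - pd.to_datetime(pickup)

def weekdays(df, col):
    # Keep rows whose date/time column falls on Monday (0) through Friday (4):
    days = pd.to_datetime(df[col]).dt.dayofweek
    return df[days < 5]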

Note: you should submit a file with only the standard comments at the top, this function, and any helper functions you have written. The grading scripts will then import the file for testing.

Hints:

Program 27: Fitting OLS. Due noon, Friday, 22 October.
(Learning Objective: to build intuition and strengthen competency with least squares method of minimizing functions.)

Write a function, compute_r_line(), that takes two iterables of numeric values representing the independent variable (xes) and the dependent variable (yes) and computes the slope and y-intercept of the linear regression line using ordinary least squares. See DS 8: Chapter 15. The pseudocode for this (a numpy sketch follows the list):

  1. Compute the standard deviation of the xes and yes. Call these sd_x and sd_y.
  2. Compute the correlation, r, of the xes and yes.
  3. Compute the slope, m, as m = r*sd_y/sd_x.
  4. Compute the y-intercept, b, as b = mean(yes) - m * mean(xes), so the line passes through the point of averages.
  5. Return m and b.
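A direct translation of the pseudocode into numpy:

import numpy as np

def compute_r_line(xes, yes):
    xes = np.array(xes, dtype=float)
    yes = np.array(yes, dtype=float)
    sd_x = np.std(xes)                      # step 1
    sd_y = np.std(yes)
    r = np.corrcoef(xes, yes)[0, 1]         # step 2
    m = r * sd_y / sd_x                     # step 3
    b = np.mean(yes) - m * np.mean(xes)     # step 4
    return m, b                             # step 5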

For example, calling the function on the example from the textbook:

s1 = [1,2,3,4,5,6,7,8,9,10]
s2 = [0,1,1,2,2,3,3,4,4,5,]
m, b = p27.compute_r_line(s1,s2)
print(m,b)
xes = np.array([0,10])
yes = m*xes + b
plt.scatter(s1,s2)
plt.plot(xes,yes)
plt.title(f'Regression line with m = {m:{4}.{2}} and y-intercept = {b:{4}.{2}}')
plt.show()
would give the plot:

For example, calling the function on the seaborn taxis dataset:

taxi = sns.load_dataset('taxis')
m, b = p27.compute_r_line(taxi['total'],taxi['tip'])
print(m,b)
xes = np.array([0,175])
yes = m*xes + b
plt.scatter(taxi['total'],taxi['tip'])
plt.plot(xes,yes,color='red')
plt.title(f'Regression line for total vs. tips with m = {m:{4}.{2}} and y-intercept = {b:{4}.{2}}')
plt.xlabel('Total Paid')
plt.ylabel('Tip')
plt.show()
would give the plot:

Note: you should submit a file with only the standard comments at the top, this function, and any helper functions you have written. The grading scripts will then import the file for testing.

Hint: Include only the libraries you need (such as numpy) and none of the ones for plotting (such as matplotlib.pyplot and seaborn) since this function is computing and not plotting.


Program 28: CS Courses. Due noon, Monday, 25 October.
(Learning Objective: to strengthen data cleaning skills using Pandas.)

In Program 24, we wrote a function that counted courses that students are currently taking. For this program, write a function that takes a DataFrame and returns a sorted list of the computer science courses taken. Each course should occur only once in the returned list, no matter how often it occurs among the students' course lists.

For example, assuming your function csCourses(df) is in p28.py:

classDF = pd.DataFrame({'Name': ["Ana","Bao","Cara","Dara","Ella","Fatima"],\
                         '# Credits': [45,50,80,115,30,90],\
                         'Current Courses': ["csci160 csci235 math160 jpn201",\
                                             "csci160 csci235 cla101 germn241",\
                                             "csci265 csci335 csci39542 germn241",\
                                             "csci49362 csci499",\
                                             "csci150 csci235 math160",\
                                             "csci335 csci39542 cla101 dan102"]})


print(f'Starting df:\n {classDF}\n')
print(f'CS courses:\n {p28.csCourses(classDF)}')
Would give output:
Starting df:
      Name  # Credits                     Current Courses
0     Ana         45      csci160 csci235 math160 jpn201
1     Bao         50     csci160 csci235 cla101 germn241
2    Cara         80  csci265 csci335 csci39542 germn241
3    Dara        115                   csci49362 csci499
4    Ella         30             csci150 csci235 math160
5  Fatima         90     csci335 csci39542 cla101 dan102

CS courses:
 ['csci150', 'csci160', 'csci235', 'csci265', 'csci335', 'csci39542', 'csci49362', 'csci499']
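A sketch of csCourses using a set to remove duplicates:

def csCourses(df):
    found = set()
    for courses in df['Current Courses']:
        for course in courses.split():
            if course.startswith('csci'):
                found.add(course)
    # sorted() returns the distinct course names in order:
    return sorted(found)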

Note: you should submit a file with only the standard comments at the top, this function, and any helper functions you have written. The grading scripts will then import the file for testing.

Hints:

Program 29: Predictions with MLM's. Due noon, Tuesday, 26 October.
(Learning Objective: to build intuition and strengthen competency with existing methods for computing multiple linear regression.)

Write a program that asks the user for the following inputs:

Your program should build a linear model, based on the DataFrame and the two independent variables, that predicts the value of the dependent variable. Your program should then predict the dependent variable from the two input values provided for the independent variables.

Use the LinearRegression() from scikit-learn to fit the model and predict the value. For example, if you were working with the mpg dataset from seaborn:

from sklearn import linear_model
regr = linear_model.LinearRegression()
regr.fit(mpg[['horsepower','weight']], mpg['mpg'])
would fit the model to the independent variables horsepower and weight to predict the dependent variable mpg.

To predict, using this model:

New_horsepower = 200
New_weight = 3500
print (f'Predicted value: {regr.predict([[New_horsepower,New_weight]])[0]}')
would print:
Predicted value: 15.900087446128559

A sample run of your program would look like:

Enter name of CSV:  mpg.csv
Enter name of first independent variable: displacement
Enter name of second independent variable: acceleration
Enter name of the dependent variable: mpg
Enter value for first variable for prediction: 100
Enter value for second variable for prediction: 12.0
which would output:
Predicted mpg:  29.400598924519038
Your output should contain the dependent variable name followed by a colon and the predicted value.

Note: your program should ask separately for input 6 times, in the order listed above. Changing the order or combining the inputs into fewer lines will cause the autograder to crash.
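
A rough sketch of such a program (the exact prompt wording is up to you, and reading the CSV with dropna() is an assumption) might look like:

import pandas as pd
from sklearn import linear_model

def main():
    #Ask for the six inputs separately, in the order listed above:
    csv_file = input('Enter name of CSV: ')
    ind1 = input('Enter name of first independent variable: ')
    ind2 = input('Enter name of second independent variable: ')
    dep = input('Enter name of the dependent variable: ')
    val1 = float(input('Enter value for first variable for prediction: '))
    val2 = float(input('Enter value for second variable for prediction: '))

    df = pd.read_csv(csv_file).dropna()
    #Fit the model on the two independent variables and predict:
    regr = linear_model.LinearRegression()
    regr.fit(df[[ind1, ind2]], df[dep])
    predicted = regr.predict([[val1, val2]])[0]
    print(f'Predicted {dep}:  {predicted}')

if __name__ == '__main__':
    main()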

Program 30: Computing Ranges. Due noon, Thursday, 28 October.
(Learning Objective: to increase understanding and skills for manipulating numeric and date data.)

Write a function that computes the range of values a column takes (i.e. the difference between the maximum and minimum values). The column contains numeric values, unless the flag datetime is set to True. If the datetime flag is true, the input column contains strings representing datetime objects (see the overview of datetime in Pandas) and the function should return the range in seconds.

For example, assuming your function colRange() was in the p30.py:

simpleDF = pd.DataFrame({'id': [1,2,3,4],\
    'checkin': ["2019-03-23 20:21:09","2019-03-23 20:27:24",\
                "2019-03-22 12:47:13","2019-03-22 12:58:17"],\
    'total': [32.51,19.99,1.05,20.50]})
print(f"Testing colRange(simpleDF,'id'): {p30.colRange(simpleDF,'id')}")
print(f"Testing colRange(simpleDF,'checkin',datetime=True): {p30.colRange(simpleDF,'checkin',datetime=True)}")
Would give output:
Testing colRange(simpleDF,'id'): 3
Testing colRange(simpleDF,'checkin',datetime=True): 114011.0

Using the function on the first rows of the seaborn taxis dataset:

import seaborn as sns
taxis = sns.load_dataset('taxis').dropna().loc[:10]
print(f"Testing colRange(taxis,'distance'): {p30.colRange(taxis,'distance')}")
print(f"Testing colRange(taxis,'dropoff',datetime=True): {p30.colRange(taxis,'dropoff',datetime=True)}")
will print
Testing colRange(taxis,'distance'): 7.21
Testing colRange(taxis,'dropoff',datetime=True): 2236694.0

Note: you should submit a file with only the standard comments at the top, this function, and any helper functions you have written. The grading scripts will then import the file for testing.

Hint: See Pandas' datetime overview. A useful method is total_seconds(), which returns the total number of seconds in a Timedelta (the result of subtracting two datetimes).
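
Putting the hint together, a minimal sketch of colRange() (assuming pd.to_datetime() can parse the datetime strings as given) could be:

import pandas as pd

def colRange(df, col, datetime=False):
    """Return the difference between the maximum and minimum of df[col];
    if datetime is True, the column holds datetime strings and the range
    is returned in seconds."""
    if datetime:
        times = pd.to_datetime(df[col])
        return (times.max() - times.min()).total_seconds()
    return df[col].max() - df[col].min()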

Program 31: Sampling Distributions. Due noon, Friday, 29 October.
(Learning Objective: to refresh understanding of normal distributions and introduce standard sampling techniques in Python.)

In Lecture #13, we introduced the Pandas sample() function for sampling rows of a DataFrame. Echoing the UBC Central Limit Theorem sampling demo, write a function that computes sample means for a given column of a DataFrame and returns a numpy array of those means.

For example, assuming your function sampleMeans() was in the p31.py:

nd = [np.random.normal() for i in range(1000)]
ed = [np.random.exponential() for i in range(1000)]
df = pd.DataFrame({ "nd" : nd, "ed" : ed})
print(p31.sampleMeans(df, 'nd', k = 5, n=5))
print(p31.sampleMeans(df, 'nd', k = 10, n=5))
would print in a sample run
[ 0.18006227 -0.02046562  0.13301251  0.52114451  0.47197969]
[ 0.06028354 -0.48566047  0.02343676 -0.28361692  0.25259547]

Continuing the example:

k_10 = p31.sampleMeans(df, 'ed', k = 10)
k_20 = p31.sampleMeans(df, 'ed', k = 20)
k_30 = p31.sampleMeans(df, 'ed', k = 30)
sns.histplot([ed,k_10,k_20,k_30],element="poly")
plt.title('Means of 1000 samples of an exponential distribution')
plt.show()
would display:

Note: you should submit a file with only the standard comments at the top, this function, and any helper functions you have written. The grading scripts will then import the file for testing.
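
One hedged sketch, assuming k is the size of each sample and n is the number of sample means returned (the defaults below are guesses based on the plotting example above):

import numpy as np

def sampleMeans(df, col, k=20, n=1000):
    """Return a numpy array of n sample means, each computed from a random
    sample of k rows of df[col] (default values are assumptions)."""
    means = [df[col].sample(k).mean() for _ in range(n)]
    return np.array(means)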

Program 32: Attendance. Due noon, Monday, 1 November.
(Learning Objective: to introduce simple feature engineering and reinforce datetime skills.)

In Lectures #14 and #15, we discussed the hypothesis that NYC public schools have lower attendance on Fridays. For this program, write a function that takes a DataFrame of school attendance records (following the format from NYC OpenData) and returns the correlation coefficient between the day of the week and daily attendance (computed as the percentage of enrolled students who are present).

For example, assuming your function attendCorr() was in the p32.py:

df = pd.read_csv('dailyAttendanceManHunt2018.csv')
print(p32.attendCorr(df))
would print -0.014420727967150241 for the sample data set for Manhattan Hunter High School (see lecture notes for obtaining additional datasets). A plot of the data is:

Note: you should submit a file with only the standard comments at the top, this function, and any helper functions you have written. The grading scripts will then import the file for testing.

Hints:
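A possible sketch, assuming the DataFrame has 'Date', 'Enrolled', and 'Present' columns as in the NYC OpenData daily attendance files (adjust the column names and date parsing to the actual file):

import pandas as pd

def attendCorr(df):
    """Correlation between day of the week and percent of enrolled students present."""
    #The OpenData files store dates like 20180905; astype(str) lets to_datetime parse them (adjust if needed):
    dates = pd.to_datetime(df['Date'].astype(str))
    day_of_week = dates.dt.dayofweek          #Monday=0, ..., Friday=4
    pct_present = 100 * df['Present'] / df['Enrolled']
    return day_of_week.corr(pct_present)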

Program 33: Confidence Intervals. Due noon, Tuesday, 2 November.
(Learning Objective: to reinforce concepts from prerequisite statistics course and build corresponding facility in Python.)

In Lecture #13, we discussed the UBC confidence interval demo, where a normal distribution (of lengths of fish) was repeatedly sampled. For each sample, the confidence interval of the sample mean was computed and stored, and it was checked whether the true mean of the distribution was contained in that confidence interval:

For this program, echo the UBC confidence interval demo: write a function that computes the confidence intervals and tabulates the running percentage of intervals whose confidence interval of the sample mean captures the true mean of the population:

For example, assuming your function ciRuns() was in the p33.py, then a possible run is:

intervals, successes = p33.ciRuns(trials = 20)
print(f"intervals: {intervals}")
print(f"successes: {successes}")
      
would print in a sample run
intervals: [(0.0843959275632028, 1.3323778628928307), (-0.146668094360358, 1.5546642787617675), (-1.029505009635772, 0.5272177024225991), (-0.5702633299624739, 0.5144718024588405), (-0.3979475729570697, 1.1005279531825056), (-0.9894141075519297, 0.8070447535623141), (-1.0433450932702595, 0.7059405804735273), (-0.8902508132395719, 0.3772852944801963), (-1.1068858052695578, 0.0816760750250739), (-0.3661920360152307, 1.003198126280235)]
successes: [0.0, 50.0, 66.66666666666667, 75.0, 80.0, 83.33333333333333, 85.71428571428571, 87.5, 88.88888888888889, 90.0]
Since the first interval doesn't contain the mean mu = 0, the first entry in successes is 0. The next interval does contain the mean, so half, or 50 percent, of the first two runs have been successful. Similarly, for each of the remaining runs, the running percentage of successes continues to increase until it reaches 90 percent. Since we are generating the samples randomly, these numbers will change from run to run, and as we increase the number of trials, the percentage of successes will converge to alpha = 95 percent.

Another possible run, where we plot the values to see the results better:

import matplotlib.pyplot as plt
import seaborn as sns
import numpy as np
intervals, successes = p33.ciRuns(mu=500, sigma=100, alpha = .90, trials = 1000)
xes = np.linspace(1,1000,1000)
yes = 90*np.ones(1000)
plt.scatter(xes,successes)
plt.plot(xes,yes,color='red')
plt.title('alpha=90, mu = 500, sigma=100, & trials=1000')
plt.show()
would display:

Note: you should submit a file with only the standard comments at the top, this function, and any helper functions you have written. The grading scripts will then import the file for testing.

Hints:
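A hedged sketch of ciRuns(), assuming defaults mu=0, sigma=1, alpha=0.95, and a fixed sample size n (the exact signature and sample size are not shown above, so adjust to the spec):

import numpy as np
from scipy import stats

def ciRuns(mu=0, sigma=1, alpha=0.95, trials=10, n=25):
    """For each trial, sample n points from Normal(mu, sigma), compute the
    confidence interval for the sample mean, and record the running percent
    of intervals that contain mu."""
    intervals, successes = [], []
    hits = 0
    for t in range(1, trials + 1):
        sample = np.random.normal(mu, sigma, n)
        #Confidence interval for the sample mean (sigma known):
        lo, hi = stats.norm.interval(alpha, loc=sample.mean(),
                                     scale=sigma / np.sqrt(n))
        intervals.append((lo, hi))
        if lo <= mu <= hi:
            hits += 1
        successes.append(100 * hits / t)
    return intervals, successes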

Program 34: Polynomial Features. Due noon, Thursday, 4 November.
(Learning Objective: to strengthen understanding of regression models and employ thresholds to decide model fitness.)

Following the textbook code demonstration in Lecture 16, write a function that takes values of an independent variable and corresponding values of a dependent variable, and fits polynomial regression models of increasing degree until the MSE falls below the threshold epsilon.

For example, assuming your function fitPoly() was in the p34.py and the code is the ice cream ratings example from Chapter 20,

df = pd.read_csv('icecream.csv')
print(f'Starting df:\n {df}')
eps = 0.5
deg = p34.fitPoly(df,'sweetness','overall',epsilon=eps)
print(f'For epsilon = {eps}, poly has degree {deg}.')
would print in a sample run:
Starting df:
    sweetness  overall
0        4.1      3.9
1        6.9      5.4
2        8.3      5.8
3        8.0      6.0
4        9.1      6.5
5        9.8      6.1
6       11.0      5.9
7       11.7      5.5
8       11.9      5.4
For epsilon = 0.5, poly has degree 1.

Continuing the example, if we lower the threshold,

eps= 0.1
deg = p34.fitPoly(df,'sweetness','overall',epsilon=eps)
print(f'For epsilon = {eps}, poly has degree: {deg}.')
would print in a sample run:
For epsilon = 0.1, poly has degree: 2.

If we lower the threshold to the default (0.01),

deg = p34.fitPoly(df,'sweetness','overall')
print(f'For default epsilon, poly has degree: {deg}.')
would print in a sample run:
For default epsilon, poly has degree: 8.

Note: you should submit a file with only the standard comments at the top, this function, and any helper functions you have written. The grading scripts will then import the file for testing.
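
One way to sketch fitPoly() is with numpy's polynomial fitting, increasing the degree until the mean squared error on the given points drops below epsilon (the default epsilon=0.01 matches the example above):

import numpy as np

def fitPoly(df, xcol, ycol, epsilon=0.01):
    """Fit polynomials of increasing degree until the MSE on the data is
    below epsilon; return that degree."""
    xes, yes = df[xcol], df[ycol]
    for deg in range(1, len(df)):
        coeffs = np.polyfit(xes, yes, deg)
        mse = np.mean((yes - np.polyval(coeffs, xes)) ** 2)
        if mse < epsilon:
            return deg
    #A polynomial through all the points (degree len(df)-1) fits exactly:
    return len(df) - 1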

Program 35: Parking Tickets. Due noon, Friday, 5 November.
(Learning Objective: to build familiarity with different approaches for encoding categorical data.)

Recent news articles focused on the significantly higher percentage of parking tickets that are unpaid for cars with out-of-state plates:

The data is aggregated across the whole city; what happens when we focus on individual neighborhoods? Similarly, a high fraction of motor vehicle collisions involve cars registered out-of-state (see crash analysis by Streetsblog NYC). How does that affect your neighborhood?

Write a function that takes a DataFrame and a column name and returns a new DataFrame with an additional column that is 1 if the specified column contains the indicator value and 0 otherwise. Your function should allow the column name and indicator value to be customized in the parameter list. The default values are Registration State and NY for the parameters colName and indicator. You can assume that the column contains the indicator as a value and that each row is blank or contains a single categorical value (i.e. a row will contain NY but never two different values such as NY, NJ). The new column should be named by the indicator value (i.e. NY for the default).

For example, assuming your function addIndicator() was in the p35.py:

df = pd.read_csv('Parking_Violations_Issued_Precinct_19_2021.csv',low_memory=False)
df['Issue Date'] = pd.to_datetime(df['Issue Date'])
dff = p35.addIndicator(df)
print(dff)
print(f'Of the {len(dff)} violations for first half of 2021 for Upper East Side (PD District 19),\n \
      {len(dff[dff.NY == 1])} are for cars registered in New York.')
would print:
          Summons Number Plate ID  ... Double Parking Violation NY
0           1474094223  KDT3875  ...                      NaN  1
1           1474094600  GTW5034  ...                      NaN  1
2           1474116280  HXM6089  ...                      NaN  1
3           1474116310  HRW4832  ...                      NaN  1
4           1474143209  JPR6583  ...                      NaN  1
...                ...      ...  ...                      ... ..
451504      8954357854  JRF3892  ...                      NaN  1
451505      8955665040   199VP4  ...                      NaN  0
451506      8955665064   196WL7  ...                      NaN  0
451507      8970451729  CNK4113  ...                      NaN  1
451508      8998400418   XJWV98  ...                      NaN  0

[451509 rows x 44 columns]
Of the 451509 violations for first half of 2021 for Upper East Side (PD District 19),
       338282 are for cars registered in New York.
Continuing the example:
dfff = p35.addIndicator(dff, colName = 'Vehicle Color', indicator="RED")
print(dfff)
plt.xlim(pd.to_datetime("01/01/21"),pd.to_datetime("06/30/21"))
sns.histplot(data=dfff, x = 'Issue Date', hue = 'RED', binwidth = 7)
plt.title('Parking violations for Upper East Side, Jan-Jul 2021')
plt.show()
would print:
Summons Number Plate ID  ... NY RED
0           1474094223  KDT3875  ...  1   0
1           1474094600  GTW5034  ...  1   0
2           1474116280  HXM6089  ...  1   0
3           1474116310  HRW4832  ...  1   0
4           1474143209  JPR6583  ...  1   0
...                ...      ...  ... ..  ..
451504      8954357854  JRF3892  ...  1   0
451505      8955665040   199VP4  ...  0   0
451506      8955665064   196WL7  ...  0   0
451507      8970451729  CNK4113  ...  1   0
451508      8998400418   XJWV98  ...  0   0

[451509 rows x 45 columns]
would give the plot:

Note: you should submit a file with only the standard comments at the top, this function, and any helper functions you have written. The grading scripts will then import the file for testing.

Hints:
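A minimal sketch of addIndicator() (returning a copy rather than modifying the DataFrame in place is an assumption based on the example above):

def addIndicator(df, colName='Registration State', indicator='NY'):
    """Return a copy of df with a new 0/1 column, named after the indicator,
    that is 1 exactly where df[colName] equals the indicator value."""
    result = df.copy()
    result[indicator] = (result[colName] == indicator).astype(int)
    return result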

Program 36: Multiple Locations. Due noon, Monday, 8 November.
(Learning Objective: to reinforce data cleaning and aggregation skills for DataFrames.)

The OpenData NYC Open Restaurant Applications dataset contains applications from food service establishments seeking authorization to place outdoor seating in front of their business on the sidewalk, roadway, or both. Some establishments are listed multiple times since they have multiple locations. Others have duplicate listings for the same location due to submitting multiple times or applying for different kinds of permits (use of sidewalk, use of roadway, etc.).

Write a function that takes a DataFrame of restaurants and returns a DataFrame with each restaurant occurring exactly once and two new columns: Num_Submissions, which contains the number of times that restaurant occurs in any entry (smallest value is 1), and Locations, a list consisting of the unique location addresses.

For example, assuming your function restaurantLocs() was in the p36.py for the file applications_coffee_truncated.csv, the code:

df = pd.read_csv('applications_coffee_truncated.csv')
newDF = p36.restaurantLocs(df)
print(newDF)

would print:

                                        Num_Submissions                                          Locations
Restaurant Name
BLUESPOON COFFEE                                      1                [76 CHAMBERS STREET, Manhattan, NY]
Black Fox Coffee                                      2  [45 East 45th, Manhattan, NY, 70 Pine Street, ...
Black Press Coffee                                    1                  [274 Columbus Ave, Manhattan, NY]
Blackstone Coffee Roasters                            1                 [502 Hudson Street, Manhattan, NY]
Blank Slate Coffee + Kitchen (Midtown)                1                    [941 2nd Avenue, Manhattan, NY]
Blank Slate Coffee + Kitchen (NoMad)                  1                [121 Madison Avenue, Manhattan, NY]
Blue Bottle Coffee                                    1              [450 West 15th street, Manhattan, NY]
Blue Bottle Coffee Gramercy                           1                    [257 Park Ave S, Manhattan, NY]
Daniels Coffee and more                               1                     [1050  3rd ave, Manhattan, NY]
FOREVER COFFEE BAR                                    1              [714 WEST  181 STREET, Manhattan, NY]
GREGORY'S COFFEE                                      1                   [80 BROAD STREET, Manhattan, NY]
GREGORYS COFFEE                                       2  [551 FASHION AVENUE, Manhattan, NY, 485 LEXING...
GROUND CENTRAL COFFEE COMPANY                         1                      [888 8 AVENUE, Manhattan, NY]
Gregorys Coffee                                      18  [58 West 44th, Manhattan, NY, 649 Broadway, Ma...
JOE: THE ART OF COFFEE                                1              [405 WEST   23 STREET, Manhattan, NY]
Kuro Kuma Espresso & Coffee                           1               [121 La Salle Street, Manhattan, NY]
Lenox Coffee                                          1             [60  West 129th street, Manhattan, NY]
Partners Coffee                                       1                 [44 Charles Street, Manhattan, NY]
Patent Coffee / Patent Pending                        1               [49 West 27th Street, Manhattan, NY]
Ralph's Coffee                                        1                [888 Madison Avenue, Manhattan, NY]
STUMPTOWN COFFEE ROASTERS                             1               [30 WEST    8 STREET, Manhattan, NY]
Starbucks Coffee                                      2                     [605 Third Ave, Manhattan, NY]
Starbucks Coffee Company                              1                     [684  6th ave , Manhattan, NY]
THINK COFFEE                                          1              [500 WEST   30 STREET, Manhattan, NY]
jacks stir brew coffee                                1             [10  10 downing street, Manhattan, NY]
le cafe coffee                                        5  [1440 broadway, Manhattan, NY, 7  east 14 st, ...

Note: you should submit a file with only the standard comments at the top, this function, and any helper functions you have written. The grading scripts will then import the file for testing.

Hints:

Think about what functions, built-in or ones that you code, could be used for aggfunc.
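
For instance, a hedged sketch using groupby/agg (the name of the address column, locCol below, is a placeholder; use whatever column in the applications file actually holds the address):

import pandas as pd

def restaurantLocs(df, locCol='Location'):
    """Group by restaurant name; count submissions and collect unique addresses.
    'Restaurant Name' matches the sample output; locCol is an assumption."""
    return df.groupby('Restaurant Name').agg(
        Num_Submissions=pd.NamedAgg(column=locCol, aggfunc='count'),
        Locations=pd.NamedAgg(column=locCol, aggfunc=lambda s: list(s.unique())))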

Program 37: Score Predictor. Due noon, Tuesday, 9 November.
(Learning Objective: to introduce logistic regression approaches implemented in sklearn.)

In Chapter 24 and Lectures #17 and #18, we worked through a logistic model to predict scoring attempts based on a single independent variable, shot_distance, as well as a second model that used multiple independent variables, ['shot_distance', 'minute', 'action_type', 'shot_type', 'opponent']. It was noted that the prediction accuracy increased from 0.6 using just shot_distance to 0.725 using the entire list. Are all of those additional variables necessary to get the increased accuracy?

For this program, write a function that identifies which variable increases the accuracy of the original model the most.

For example, assuming your function bestForPredict() was in the p37.py for the file lebron.csv, the code:

df = pd.read_csv('lebron.csv')
columns = ['minute', 'action_type', 'shot_type', 'opponent']
acc,col_name = p37.bestForPredict(df,columns)
print(f'The highest accuracy, {acc}, was obtained by including column, {col_name}.')
would print:
The highest accuracy, 0.725, was obtained by including column, action_type.

Another example with the same DataFrame:

columns = ['minute', 'opponent']
acc,col_name = p37.bestForPredict(df,columns, test_size = 100, random_state = 17)
print(f'The highest accuracy, {acc}, was obtained by including column, {col_name}.')
would print:
The highest accuracy, 0.6, was obtained by including column, opponent.

Note: you should submit a file with only the standard comments at the top, this function, and any helper functions you have written. The grading scripts will then import the file for testing.

Hints:
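A hedged sketch: for each candidate column, fit a logistic model on shot_distance plus that column (one-hot encoding categorical candidates with get_dummies) and keep the highest test accuracy. The target column name ('shot_made') and the default test_size/random_state values are assumptions:

import pandas as pd
from sklearn.model_selection import train_test_split
from sklearn.linear_model import LogisticRegression

def bestForPredict(df, columns, test_size=0.25, random_state=42,
                   y_col='shot_made', base_col='shot_distance'):
    """Return (best accuracy, column) over models built on [base_col, column]."""
    best_acc, best_col = 0.0, None
    for col in columns:
        #One-hot encode (numeric columns pass through unchanged):
        X = pd.get_dummies(df[[base_col, col]], drop_first=True)
        X_train, X_test, y_train, y_test = train_test_split(
            X, df[y_col], test_size=test_size, random_state=random_state)
        clf = LogisticRegression(max_iter=1000)
        clf.fit(X_train, y_train)
        acc = clf.score(X_test, y_test)
        if acc > best_acc:
            best_acc, best_col = acc, col
    return best_acc, best_col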

Program 38: Ticket Prep. Due noon, Thursday, 11 November.
(Learning Objective: to employ aggregation and data cleaning techniques to prepare data for use in classification.)

Can you predict which cars will get an excessive number of tickets? In Lectures #15-18 and
Chapter 24, we focused on building the classifiers. This question focuses on the prerequisite step: preparing the data that is used in building the classifier.

As a first step, we will group by license plate number and aggregate the state, vehicle type and color by choosing the first item stored for each:

newDF =  df.groupby('Plate ID').agg(NumTickets =
    pd.NamedAgg(column = 'Plate ID', aggfunc = 'count'),
    Registration = pd.NamedAgg(column = 'Plate Type', aggfunc = 'first'),
    State = pd.NamedAgg(column = 'Registration State', aggfunc = 'first'),
    VehicleColor = pd.NamedAgg(column = 'Vehicle Color', aggfunc = 'first'))

Printing the unique values for State, Registration and Vehicle Color:

print(f"Registration: {newDF['Registration'].unique()}")
print(f"State: {newDF['State'].unique()}")
print(f"VehicleColor: {newDF['VehicleColor'].unique()}")
shows the expected values for states, but many different types of registrations and many abbreviations and misspellings for colors:
Registration: ['PAS' 'COM' 'USC' 'MOT' 'LMB' '999' 'CMB' 'RGL' 'SRF' 'MED' 'APP' 'ORG'
 'ITP' 'OMR' 'TRA' 'BOB' 'SPO' 'LMA' 'VAS' 'OML' 'TOW' 'DLR' 'AMB' 'TRC'
 'STG' 'AGR' 'NLM' 'ORC' 'IRP' 'TRL' 'MCL' 'OMT' 'SCL' 'SPC' 'CHC' 'HIS'
 'SRN' 'RGC' 'PHS' 'PSD' 'MCD' 'NYA' 'JCA' 'SOS' 'CSP' 'OMS' 'CBS' 'OMV'
 'HAM']
State: ['DP' 'NJ' 'PA' 'TX' 'OK' 'NY' 'OH' '99' 'DC' 'AR' 'IL' 'MN' 'NC' 'NV'
 'FL' 'GV' 'CA' 'NH' 'MD' 'CT' 'MO' 'RI' 'MS' 'MA' 'MI' 'TN' 'WV' 'AL'
 'OR' 'KS' 'VA' 'KY' 'AZ' 'WA' 'NM' 'CO' 'SC' 'WI' 'ME' 'DE' 'HI' 'IN'
 'WY' 'MT' 'NE' 'VT' 'GA' 'LA' 'SD' 'ON' 'IA' 'ID' 'ND' 'SK' 'UT' 'AK'
 'QB' 'AB' 'BC' 'MX' 'PR' 'NS' 'MB' 'FO']
VehicleColor: ['BLACK' 'SILVE' 'GREY' 'WHITE' 'RED' 'OTHER' 'BLUE' 'GY' 'BLK' 'BK'
 'PURPL' 'TAN' 'GREEN' 'YELLO' 'ORANG' 'BL' 'SILV' 'GRAY' 'BROWN' nan
 'GRY' 'WH' 'SIL' 'GOLD' 'WT' 'WHT' 'GR' 'RD' 'YW' 'BR' 'LTG' 'WH/' 'OR'
 'WHB' 'TN' 'BRN' 'MR' 'DK/' 'BLW' 'GL' 'PR' 'BU' 'DKB' 'W' 'GRT' 'ORG'
 'RD/' 'LT/' 'NO' 'LTT' 'GRN' 'BN' 'TB' 'BRO' 'B' 'RDW' 'SL' 'BURG' 'BLU'
 'NOC' 'BK/' 'DKG' 'WHG' 'PINK' 'G' 'LAVEN' 'BL/' 'YEL' 'OG' 'GRW' 'WHI'
 'WHTE' 'BUR' 'GY/' 'DKR' 'RDT' 'GN' 'BUN' 'SV' 'BKG' 'YELL' 'WHIT' 'GR/'
 'LTTN' 'SLV' 'BRWN' 'GYB' 'WHTIE' 'WI' 'BUS' 'LTB' 'TN/' 'GD' 'MAROO'
 'BW' 'BLG' 'ORA' 'GRA' 'DKP' 'NAVY' 'GREG' 'GRB' 'BRW' 'BBRN' 'R' 'GRRY'
 'BLA' 'BG' 'MAR' 'BURGA' 'BRWON' 'YLW' 'ORNG' 'HREY' 'DERD' 'YL' 'PLE'
 'BWN' 'BI']
The first two registration types are the most common:
count = len(newDF)
pasCount = len(newDF[newDF['Registration'] == 'PAS'])
comCount = len(newDF[newDF['Registration'] == 'COM'])
print(f'{count} different vehicles, {100*(pasCount+comCount)/count} percent are passenger or commercial plates.')

And for the Precinct District 19 dataset that contains almost a half million tickets:

159928 different vehicles, 93.95477965084288 percent are passenger or commercial plates.
Similarly, the top 15 values for vehicle color account for most of the entries:
print(newDF['VehicleColor'].unique())
print(f"The top 15 values account for {100*newDF['VehicleColor'].value_counts()[:15].sum()/len(newDF)} percent.")
print(f"Those values are: {newDF['VehicleColor'].value_counts()[:15]}.")
The top 15 values account for 95.37291781301586 percent.
Those values are:
WH       27814
GY       24704
WHITE    20817
BK       20778
BLACK    14486
GREY      9629
BL        9249
SILVE     5704
BLUE      5300
RD        4395
RED       3303
OTHER     2678
GR        1674
BROWN     1059
TN         938

To clean the data, write two functions that can be applied to the DataFrame:

After applying these functions, the resulting DataFrame can then be used to build a classifier for how likely a particular car is to be one that gets more than a ticket a day (see Program 42).
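
As a rough, hedged sketch (the exact cleaning rules are yours to choose), the two functions might map plate types to PAS/COM/OTHER and collapse color abbreviations to a handful of standard names, consistent with the cleaned values shown in Program 42:

def cleanReg(reg):
    """Keep the two most common plate types; lump everything else together as OTHER."""
    return reg if reg in ('PAS', 'COM') else 'OTHER'

def cleanColor(color):
    """Map common abbreviations and misspellings to standard color names;
    the mapping below is illustrative, not exhaustive."""
    mapping = {'WH': 'WHITE', 'WHT': 'WHITE', 'WHI': 'WHITE', 'WHITE': 'WHITE',
               'BK': 'BLACK', 'BLK': 'BLACK', 'BLACK': 'BLACK',
               'GY': 'GRAY', 'GRY': 'GRAY', 'GREY': 'GRAY', 'GRAY': 'GRAY',
               'BL': 'BLUE', 'BLU': 'BLUE', 'BLUE': 'BLUE',
               'RD': 'RED', 'RED': 'RED',
               'SILVE': 'SILVER', 'SIL': 'SILVER', 'SL': 'SILVER'}
    return mapping.get(str(color).upper(), 'OTHER')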

Note: you should submit a file with only the standard comments at the top, this function, and any helper functions you have written. The grading scripts will then import the file for testing.

Program 39: Binary Digit Classification. Due noon, Friday, 12 November.
(Learning Objective: to build classifiers using sklearn.)

This program uses the canonical MNIST dataset of hand-written digits discussed in Lecture #19 and available in the sklearn digits dataset:

The dataset has 1797 scans of hand-written digits. Each entry has the digit represented (target) as well as the 64 values representing the gray scale for the 8 x 8 image. The first 5 entries are:

The gray scales for the first 5 entries, flattened to a one-dimensional array:

[[ 0.  0.  5. 13.  9.  1.  0.  0.  0.  0. 13. 15. 10. 15.  5.  0.  0.  3. 15.  2.  0. 11.  8.  0.  0.  4. 12.  0.  0.  8.  8.  0.  0.  5.  8.  0.  0.  9.  8.  0.  0.  4. 11.  0.  1. 12.  7.  0.  0.  2. 14.  5. 10. 12.  0.  0.  0.  0.  6. 13. 10.  0.  0.  0.]
 [ 0.  0.  0. 12. 13.  5.  0.  0.  0.  0.  0. 11. 16.  9.  0.  0.  0.  0.  3. 15. 16.  6.  0.  0.  0.  7. 15. 16. 16.  2.  0.  0.  0.  0.  1. 16. 16.  3.  0.  0.  0.  0.  1. 16. 16.  6.  0.  0.  0.  0.  1. 16. 16.  6.  0.  0.  0.  0.  0. 11. 16. 10.  0.  0.]
 [ 0.  0.  0.  4. 15. 12.  0.  0.  0.  0.  3. 16. 15. 14.  0.  0.  0.  0.  8. 13.  8. 16.  0.  0.  0.  0.  1.  6. 15. 11.  0.  0.  0.  1.  8. 13. 15.  1.  0.  0.  0.  9. 16. 16.  5.  0.  0.  0.  0.  3. 13. 16. 16. 11.  5.  0.  0.  0.  0.  3. 11. 16.  9.  0.]
 [ 0.  0.  7. 15. 13.  1.  0.  0.  0.  8. 13.  6. 15.  4.  0.  0.  0.  2.  1. 13. 13.  0.  0.  0.  0.  0.  2. 15. 11.  1.  0.  0.  0.  0.  0.  1. 12. 12.  1.  0.  0.  0.  0.  0.  1. 10.  8.  0.  0.  0.  8.  4.  5. 14.  9.  0.  0.  0.  7. 13. 13.  9.  0.  0.]
 [ 0.  0.  0.  1. 11.  0.  0.  0.  0.  0.  0.  7.  8.  0.  0.  0.  0.  0.  1. 13.  6.  2.  2.  0.  0.  0.  7. 15.  0.  9.  8.  0.  0.  5. 16. 10.  0. 16.  6.  0.  0.  4. 15. 16. 13. 16.  1.  0.  0.  0.  0.  3. 15. 10.  0.  0.  0.  0.  0.  2. 16.  4.  0.  0.]]

To start, we will focus on entries that represent 0's and 1's. The first 10 from the dataset are displayed below:

Write a function that builds a logistic regression model that classifies binary digits:

For example, let's flatten the entries and restrict the dataset to just binary digits, as we did in lecture:

#Import datasets, classifiers and performance metrics:
from sklearn import datasets, svm, metrics
from sklearn.model_selection import train_test_split
from sklearn.linear_model import LogisticRegression
#Using the digits data set from sklearn:
from sklearn import datasets
digits = datasets.load_digits()
print(digits.target)
print(type(digits.target), type(digits.data))
#flatten the images
n_samples = len(digits.images)
data = digits.images.reshape((n_samples, -1))
print(data[0:5])
print(f'The targets for the first 5 entries: {digits.target[:5]}')
#Make a DataFrame with just the binary digits:
binaryDigits = [(d,t) for (d,t) in zip(data,digits.target) if t <= 1]
bd,bt = zip(*binaryDigits)
print(f'The targets for the first 5 binary entries: {bt[:5]}')
which will print:

  [0 1 2 ... 8 9 8]
 
[[ 0.  0.  5. 13.  9.  1.  0.  0.  0.  0. 13. 15. 10. 15.  5.  0.  0.  3.
  15.  2.  0. 11.  8.  0.  0.  4. 12.  0.  0.  8.  8.  0.  0.  5.  8.  0.
   0.  9.  8.  0.  0.  4. 11.  0.  1. 12.  7.  0.  0.  2. 14.  5. 10. 12.
   0.  0.  0.  0.  6. 13. 10.  0.  0.  0.]
 [ 0.  0.  0. 12. 13.  5.  0.  0.  0.  0.  0. 11. 16.  9.  0.  0.  0.  0.
   3. 15. 16.  6.  0.  0.  0.  7. 15. 16. 16.  2.  0.  0.  0.  0.  1. 16.
  16.  3.  0.  0.  0.  0.  1. 16. 16.  6.  0.  0.  0.  0.  1. 16. 16.  6.
   0.  0.  0.  0.  0. 11. 16. 10.  0.  0.]
 [ 0.  0.  0.  4. 15. 12.  0.  0.  0.  0.  3. 16. 15. 14.  0.  0.  0.  0.
   8. 13.  8. 16.  0.  0.  0.  0.  1.  6. 15. 11.  0.  0.  0.  1.  8. 13.
  15.  1.  0.  0.  0.  9. 16. 16.  5.  0.  0.  0.  0.  3. 13. 16. 16. 11.
   5.  0.  0.  0.  0.  3. 11. 16.  9.  0.]
 [ 0.  0.  7. 15. 13.  1.  0.  0.  0.  8. 13.  6. 15.  4.  0.  0.  0.  2.
   1. 13. 13.  0.  0.  0.  0.  0.  2. 15. 11.  1.  0.  0.  0.  0.  0.  1.
  12. 12.  1.  0.  0.  0.  0.  0.  1. 10.  8.  0.  0.  0.  8.  4.  5. 14.
   9.  0.  0.  0.  7. 13. 13.  9.  0.  0.]
 [ 0.  0.  0.  1. 11.  0.  0.  0.  0.  0.  0.  7.  8.  0.  0.  0.  0.  0.
   1. 13.  6.  2.  2.  0.  0.  0.  7. 15.  0.  9.  8.  0.  0.  5. 16. 10.
   0. 16.  6.  0.  0.  4. 15. 16. 13. 16.  1.  0.  0.  0.  0.  3. 15. 10.
   0.  0.  0.  0.  0.  2. 16.  4.  0.  0.]]
The targets for the first 5 entries: [0 1 2 3 4]
The targets for the first 5 binary entries: (0, 1, 0, 1, 0)

We can then use the restricted data and targets datasets as input to our function, assuming your function binary_digit_clf() was in the p39.py:

confuse_mx = p39.binary_digit_clf(bd,bt,test_size=0.95)
print(f'Confusion matrix:\n{confuse_mx}')
disp = metrics.ConfusionMatrixDisplay(confusion_matrix=confuse_mx)
#Use a different color map since the default is garish:
disp.plot(cmap = "Purples")
plt.title("Logistic Regression Classifier for Binary Digits")
plt.show()
which will print:
Confusion matrix:
[[172   0]
 [  4 166]]
and display:

Another example with the same data, but a different size for the data reserved for testing:

confuse_mx = p39.binary_digit_clf(bd,bt)
print(f'Confusion matrix:\n{confuse_mx}')
would print:
Confusion matrix:
[[43  0]
 [ 0 47]]

Note: you should submit a file with only the standard comments at the top, this function, and any helper functions you have written. The grading scripts will then import the file for testing.
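
A minimal sketch of binary_digit_clf() (the default test_size and random_state below are assumptions):

import numpy as np
from sklearn.model_selection import train_test_split
from sklearn.linear_model import LogisticRegression
from sklearn import metrics

def binary_digit_clf(data, target, test_size=0.25, random_state=21):
    """Split the binary-digit data, fit a logistic regression classifier,
    and return the confusion matrix on the test set."""
    X = np.asarray(data)
    y = np.asarray(target)
    X_train, X_test, y_train, y_test = train_test_split(
        X, y, test_size=test_size, random_state=random_state)
    clf = LogisticRegression(max_iter=1000)
    clf.fit(X_train, y_train)
    return metrics.confusion_matrix(y_test, clf.predict(X_test))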

Program 40: Enrollments by Courses. Due noon, Monday, 15 November.
(Learning Objective: to reinforce categorical encoding and aggregation techniques.)

Building on Program 24 and Program 28, write a function, byCourses(), that takes a DataFrame that contains students' names, number of credits completed, and current courses (a string with the course names separated by ' '), and returns a Series whose index is the computer science courses and whose values are the number of students currently taking each course:
  • No other rows or columns should be included in the DataFrame.

    For example, assuming your function byCourses() was in the p40.py:

    classDF = pd.DataFrame({'Name': ["Ana","Bao","Cara","Dara","Ella","Fatima"],\
                              '# Credits': [45,50,80,115,30,90],\
                              'Current Courses': ["csci160 csci235 math160 jpn201",\
                                                  "csci160 csci235 cla101 germn241",\
                                                  "csci265 csci335 csci39542 germn241",\
                                                  "csci49362 csci499",\
                                                  "csci150 csci235 math160",\
                                                  "csci335 csci39542 cla101 dan102"]})
    print(f'Starting df:\n {classDF}\n')
    print(f'CS courses:\n {p40.byCourses(classDF)}')
    Would give output:
    Starting df:
          Name  # Credits                     Current Courses
    0     Ana         45      csci160 csci235 math160 jpn201
    1     Bao         50     csci160 csci235 cla101 germn241
    2    Cara         80  csci265 csci335 csci39542 germn241
    3    Dara        115                   csci49362 csci499
    4    Ella         30             csci150 csci235 math160
    5  Fatima         90     csci335 csci39542 cla101 dan102
    
    CS courses:
    csci150      1
    csci160      2
    csci235      3
    csci265      1
    csci335      2
    csci39542    2
    csci49362    1
    csci499      1

    Note: you should submit a file with only the standard comments at the top, this function, and any helper functions you have written. The grading scripts will then import the file for testing.

    Hints:
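    One possible sketch, splitting the course strings into one row per course and counting the CS ones:

    def byCourses(df):
        """Return the number of students taking each computer science course."""
        #One row per (student, course) pair:
        courses = df['Current Courses'].str.split().explode()
        cs_only = courses[courses.str.startswith('csci')]
        return cs_only.value_counts().sort_index()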

    Program 41: Classifier Misses. Due noon, Tuesday, 16 November.
    (Learning Objective: to refresh matrix manipulation skills and strengthen understanding of multiway classification results.)

    In Lectures #20 and #21 and
    DS 100: Chapter 24, we used multiway classification on the canonical iris data set. For this program, write a function that will take a confusion matrix from such an analysis and return the class that is misclassified most often.

    In DS 100: Chapter 24, the confusion matrix computed for the iris dataset was:

    The first class has no entries outside its diagonal entry and a total of 19 members, so its count of misclassified items is 0. The second class has 2 elements mislabeled. The third class has 0 mislabeled. So, your function would return the second class, the one labeled iris-versicolor.

    For example, examining a confusion matrix for the MNIST digits dataset and assuming clf_misses is in p41 and the appropriate libraries are loaded:

    digits = datasets.load_digits()
    n_samples = len(digits.images)
    data = digits.images.reshape((n_samples, -1))
    X_train, X_test, y_train, y_test = train_test_split(data, digits.target,random_state=42, test_size=.75)
    clf = LogisticRegression(max_iter=1000)
    clf.fit(X_train,y_train)
    y_predict = clf.predict(X_test)
    confuse_mx = metrics.confusion_matrix(y_test,y_predict)
    disp = metrics.ConfusionMatrixDisplay(confusion_matrix=confuse_mx)
    disp.plot(cmap = "Purples")
    plt.title("Logistic Regression Classifier for Digits")
    plt.show()
    print(f'The most misclassified class is {p41.clf_misses(confuse_mx)}.')

    would display:

    and print:

    The most misclassified class is 3.
    Hints:
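    A hedged sketch (this version counts misclassified members along each row of the confusion matrix; transpose first if your convention is per predicted class):

    import numpy as np

    def clf_misses(confuse_mx):
        """Return the class whose row has the most off-diagonal entries."""
        cm = np.asarray(confuse_mx)
        missed = cm.sum(axis=1) - np.diag(cm)
        return int(np.argmax(missed))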

    Program 42: Ticket Predictor. Due noon, Thursday, 18 November.
    (Learning Objective: to use the tools provided by sklearn to create a support vector machine.)

    For this program, we will train a linear regression classifier to predict the number of tickets a vehicle is likely to receive. The data is first cleaned using the functions you wrote in Program 38 to use standardized spellings of color names as well as one of three vehicle classes. The first function adds indicators for the specified categorical features. The second function trains a linear regression classifier on the data and returns the accuracy (score) of your classifier on the test data, as well as the classifier.

    For example, let's clean the dataset using the functions from Program 38, as we did in lecture on Parking_Q1_2021_Lexington.csv:

    import numpy as np
    import pandas as pd
    import p38
    import p42
    from sklearn.model_selection import train_test_split
    from sklearn.linear_model import LinearRegression
    
    df = pd.read_csv('Parking_Q1_2021_Lexington.csv')
    #Focus on features about vehicles:
    df = df[['Plate ID','Plate Type','Registration State','Issue Date','Vehicle Color']]
    #Drop rows that are missing info:
    df = df.dropna()
    print(f'Your file contains {len(df)} parking violations.')
    #Clean the data, using the functions written for P38:
    df['Plate Type'] = df['Plate Type'].apply(p38.cleanReg)
    df['Vehicle Color'] = df['Vehicle Color'].apply(p38.cleanColor)
    #Count tickets for each vehicle:
    newDF =  df.groupby('Plate ID').agg(NumTickets =
        pd.NamedAgg(column = 'Plate ID', aggfunc = 'count'),
        Registration = pd.NamedAgg(column = 'Plate Type', aggfunc = 'first'),
        State = pd.NamedAgg(column = 'Registration State', aggfunc = 'first'),
        Color = pd.NamedAgg(column = 'Vehicle Color', aggfunc = 'first')
    )
    print(newDF)
    
    which will print:
    Your file contains 20589 parking violations.
              NumTickets Registration State  Color
    Plate ID
    00356R2            1          PAS    TX  WHITE
    004LSM             1          PAS    TN  OTHER
    00574R7            1          PAS    TX  WHITE
    0064NQD            1          PAS    DP  BLACK
    0107NQD            1          PAS    DP   GRAY
    ...              ...          ...   ...    ...
    ZRB1864            1          PAS    PA  WHITE
    ZSA6859            1          PAS    PA   GRAY
    ZSE1922            1          PAS    PA  WHITE
    ZWF62E             1          PAS    NJ  OTHER
    ZWZ35J             1          PAS    NJ  OTHER
          

    We can then use the cleaned data, assuming your functions are in the p42.py:

    newDF = p42.addIndicators(newDF)
    print(newDF)
    will add the indicator variables:
              NumTickets  Registration_OTHER  ...  State_WI  State_WV
    Plate ID                                  ...
    00356R2            1                   0  ...         0         0
    004LSM             1                   0  ...         0         0
    00574R7            1                   0  ...         0         0
    0064NQD            1                   0  ...         0         0
    0107NQD            1                   0  ...         0         0
    ...              ...                 ...  ...       ...       ...
    ZRB1864            1                   0  ...         0         0
    ZSA6859            1                   0  ...         0         0
    ZSE1922            1                   0  ...         0         0
    ZWF62E             1                   0  ...         0         0
    ZWZ35J             1                   0  ...         0         0

    We can then use this in the second function to fit a classifier that will predict tickets based on characteristics of the vehicle:

    xes = ['State_NY','Registration_OTHER', 'Registration_PAS', 'Color_GRAY', 'Color_OTHER', 'Color_WHITE']
    y_col = 'NumTickets'
    sc,clf = p42.build_clf(newDF, xes)
    print(f'Score is {sc}.')
    predicted = clf.predict([[1,0,0,0,0,1]])[0]
    print(f'NY state, white commercial vehicle (encoded as: [1,0,0,0,0,1])\n\twill get {predicted:.2f} tickets.')
    predicted = clf.predict([[1,0,1,1,0,0]])[0]
    print(f'NY state, gray passenger vehicle (encoded as: [1,0,1,1,0,0])\n\twill get {predicted:.2f} tickets.')
    
    which will print:
    Score is 0.04310334739677757.
    NY state, white commercial vehicle (encoded as: [1,0,0,0,0,1])
    	will get 2.48 tickets.
    NY state, gray passenger vehicle (encoded as: [1,0,1,1,0,0])
    	will get 1.16 tickets.

    Note: you should submit a file with only the standard comments at the top, this function, and any helper functions you have written. The grading scripts will then import the file for testing.
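
    A hedged sketch of the two functions (the columns encoded by addIndicators(), the use of drop_first, and the defaults in build_clf() are assumptions based on the sample output):

    import pandas as pd
    from sklearn.model_selection import train_test_split
    from sklearn.linear_model import LinearRegression

    def addIndicators(df, columns=('Registration', 'State', 'Color')):
        """One-hot encode the categorical columns, dropping the first level of each."""
        return pd.get_dummies(df, columns=list(columns), drop_first=True)

    def build_clf(df, xes, y_col='NumTickets', test_size=0.25, random_state=42):
        """Fit a linear regression on the indicator columns; return (score, model)."""
        X_train, X_test, y_train, y_test = train_test_split(
            df[xes], df[y_col], test_size=test_size, random_state=random_state)
        clf = LinearRegression()
        clf.fit(X_train, y_train)
        return clf.score(X_test, y_test), clf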

    Program 43: Moving. Due noon, Friday, 19 November.
    (Learning Objective: to reinforce linear algebra concepts from prerequisite course and build corresponding facility in Python.)

    In Lecture #21 and Explained Visually, we reviewed matrices, eigenvectors and eigenvalues.

    For this program, write a function that extends the moving-between-states example from lecture to take the transition probabilities for any number of states, the starting populations, and the number of years, and returns an array of the ending population of each state.

    For example, suppose people move between the following three states with the given probabilities of moving each year:

    If the initial populations are New York: 20 million, California: 40 million, and Texas: 25 million, then the transition matrix is:
    t_mx = np.array([[.7, .07, .1],
             [.25,.9,.15],
             [.05,.03,.75]])
    
    and the starting populations (in millions) are pop0 = np.array([20, 40, 25]), then we can compute the population after 1 year by taking the initial populations and computing what fraction move to each of the other states:
    pop1 = t_mx @ pop0
    print(f'Population of each state after 1 year: {pop1}.')
    which is
    Population of each state after 1 year: [19.3  44.75 20.95].

    Similarly, the population after 2 years can be found by multiplying the population after 1 year by the transition matrix. More generally, the population after k+1 years can be found by multiplying the populations at year k by the transition matrix. The steady state population can be found by first finding the eigenvector corresponding to the maximal eigenvalue of 1, scaling it so its entries sum to 1 (i.e. dividing through by its sum) to get percentages, and then multiplying the percentages by the total population.

    For example, continuing from above, and assuming your functions are in p43 and the appropriate libraries are loaded:

    pop1 = p43.moving(t_mx, pop0)
    print(f'Population of each state after 1 year: {pop1}')
    pop100 = p43.moving(t_mx, pop0, num_years=100)
    print(f'Population of each state after 100 years: {pop100}')
    pop_steady = p43.steadyState(t_mx, pop0)
    print(f'Steady state population: {pop_steady}')
    and print:
    Population of each state after 1 year: [19.3  44.75 20.95]
    Population of each state after 100 years: [16.91747573 57.76699029 10.31553398]
    Steady state population: [16.91747573 57.76699029 10.31553398]

    Note: you should submit a file with only the standard comments at the top, this function, and any helper functions you have written. The grading scripts will then import the file for testing.

    Hints:
    • The module numpy has many useful functions for computing determinants and eigenvalues in its linear algebra package. Note that the * for matrices is element-wise (not regular matrix multiplication). To multiply two matrices, a and b together use a @ b.
    • A useful function to raise a matrix to a power is numpy.linalg.matrix_power, described in the numpy API reference.
    • The numpy.linalg.eig function returns an array of eigenvalues and the associated eigenvectors as columns of an array. For the example above,
      import numpy.linalg as LA
      w,v = LA.eig(t_mx)
      print(f'The eigenvalues are: {w} and eigenvectors are:\n{v}.')
      would print:
      The eigenvalues are: [ 1.  -0.2  0.1] and eigenvectors are:
      [[-6.67423812e-01 -7.07106781e-01  2.67261242e-01]
       [-5.72077554e-01  6.87552368e-17 -8.01783726e-01]
       [-4.76731295e-01  7.07106781e-01  5.34522484e-01]].
      The eigenvector for eigenvalue = 1 is the first column (not the first row; the rows are grouped together since we enter/print 2D arrays by rows). See Lecture #21 notes for scaling the vector to compute the steady state population.
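
    Putting the hints together, a minimal sketch of the two functions might be:

    import numpy as np
    import numpy.linalg as LA

    def moving(t_mx, pop0, num_years=1):
        """Apply the transition matrix num_years times to the starting populations."""
        return LA.matrix_power(t_mx, num_years) @ pop0

    def steadyState(t_mx, pop0):
        """Scale the eigenvector for the eigenvalue closest to 1 so its entries
        sum to 1, then multiply by the total population."""
        w, v = LA.eig(t_mx)
        idx = np.argmin(np.abs(w - 1))      #column of the eigenvalue near 1
        vec = np.real(v[:, idx])
        vec = vec / vec.sum()               #convert to fractions of the population
        return vec * np.sum(pop0)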

    Program 44: Model Comparison. Due noon, Monday, 22 November.
    (Learning Objective: to build facility with fitting multiclass models.)

    In Lectures #21 and #22 and Chapter 24, we built classifiers for the iris dataset. Write a function that fits a Logistic Regression model and a Support Vector Machine to the same training data and returns the score of each on the same testing data.

    For example, assuming your functions are in p44 and the appropriate libraries are loaded:

    iris = datasets.load_iris()
    l_40,s_40 = p44.compare_clf(iris.data,iris.target,test_size=.4)
    print(f'With a 40% test set, LogReg classifer has score {l_40}.\nSVM classifier had score {s_40}.')
    
    xes = list(range(5,100,5))
    runs = [p44.compare_clf(iris.data,iris.target,test_size=x/100) for x in xes]
    lr_runs, svm_runs = zip(*runs)
    plt.plot(xes, lr_runs, label="LogReg")
    plt.plot(xes, svm_runs, label= "SVM")
    plt.xlabel('Test Size (Percent)')
    plt.ylabel('Score')
    plt.title('Iris Dataset:  Test Size vs Score')
    plt.legend()
    plt.show()
    would print:
    With a 40% test set, LogReg classifer has score 0.9833333333333333.
    SVM classifier had score 0.9833333333333333.
    and display:

    Note: you should submit a file with only the standard comments at the top, this function, and any helper functions you have written. The grading scripts will then import the file for testing.
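
    A minimal sketch of compare_clf() (the default test_size/random_state and the SVM kernel are assumptions):

    from sklearn.model_selection import train_test_split
    from sklearn.linear_model import LogisticRegression
    from sklearn.svm import SVC

    def compare_clf(data, target, test_size=0.4, random_state=42):
        """Fit a logistic regression model and an SVM on the same split and
        return both test scores."""
        X_train, X_test, y_train, y_test = train_test_split(
            data, target, test_size=test_size, random_state=random_state)
        log_clf = LogisticRegression(max_iter=1000).fit(X_train, y_train)
        svm_clf = SVC().fit(X_train, y_train)
        return log_clf.score(X_test, y_test), svm_clf.score(X_test, y_test)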

    Program 45: Component Retention. Due noon, Tuesday, 23 November.
    (Learning Objective: to employ programming skills to evaluate the number of principal components to use in dimensionality reduction.)

    In Lecture 22 and (also in Chapter 25), we used scree plots to provide a visualization of the captured variance. This assignment asks you to implement two other popular ways of determining the number of dimensions to retain.

    Using the example from the textbook, if a is
    a = np.array([585.57, 261.06, 166.31,  57.14,  48.16,  39.79,  31.71,  28.91,
          24.23,  22.23,  20.51,  18.96,  17.01,  15.73,   7.72,   4.3 ,
          1.95,   0.04])
    Then cv, the fraction of the variance captured by each component (the squared entries of a, normalized to sum to 1), would be:
    array([0.76, 0.15, 0.06, 0.01, 0.01, 0.  , 0.  , 0.  ,   0.  , 0.  , 0.  ,
          0.  , 0.  , 0.  , 0.  , 0.  , 0.  , 0.  ])
    and the function, capture85(a), would return 2 since the first coordinate captures 76% of the variance, which is less than 85%, while the first 2 coordinates capture 76 + 15 = 91% of the variance.

    For the second function, again using the example from the textbook, for the array a, the avg would be 75.07, and the function, averageEigenvalue(a) would return 3 since the first three coordinates are larger than the average.

    Note: you should submit a file with only the standard comments at the top, and these two functions. The grading scripts will then import the file for testing.
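
    A sketch of the two functions, assuming the input array holds the singular values (so the captured variance uses the squared entries, which reproduces the cv values above):

    import numpy as np

    def capture85(arr):
        """Return how many components are needed to capture at least 85% of the variance."""
        cv = arr**2 / np.sum(arr**2)
        return int(np.argmax(np.cumsum(cv) >= 0.85) + 1)

    def averageEigenvalue(arr):
        """Return the number of entries larger than the average entry."""
        return int(np.sum(arr > arr.mean()))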

    Program 46: Digits Components. Due noon, Monday, 29 November.
    (Learning Objective: to strengthen understanding of the intrinsic dimensions of data sets via exploration of the classic digits dataset.)

    In Lecture #21, we introduced Principal Components Analysis and the number of components needed to capture the intrinsic dimension of the data set. For this program, write a function that allows the user to explore how many dimensions are needed to see the underlying structure of images from the sklearn digits dataset (inspired by Python Data Science Handbook: Section 5.9 (PCA)).

    Write a function that approximates an image by summing up a fixed number of its components:

    As discussed in Python Data Science Handbook: Section 5.9, we can view the images as sums of the components. For our flattened images, we have 1D arrays of length 64. Here's the first one from the dataset:
    [[ 0.  0.  5. 13.  9.  1.  0.  0.  0.  0. 13. 15. 10. 15.  5.  0.  0.  3. 15.  2.  0. 11.  8.  0.  0.  4. 12.  0.  0.  8.  8.  0.  0.  5.  8.  0.  0.  9.  8.  0.  0.  4. 11.  0.  1. 12.  7.  0.  0.  2. 14.  5. 10. 12.  0.  0.  0.  0.  6. 13. 10.  0.  0.  0.]

    If we let x1 = [1 0 ... 0], x2 = [0 1 0 ... 0], ..., x64 = [0 ... 0 1] (vectors corresponding to the axis), then we can write our images, im = [i1 i2 ... i64], as:

    im = x1*i1 + x2*i2 + ... + x64*i64
       = x1*0  + x2*0  + x3*5 + ... + x64*0
    where the second line comes from plugging in the values of im.

    In a similar fashion, we can represent the image in terms of the axes, c1, c2, ..., c64, that the PCA analysis returns:

    im = mean + c1*i1 + c2*i2 + ... + c64*i64
    Since the axes of PCA are chosen so that the first one captures the most variance, the second the next most, etc., the later axes capture very little variance and likely add little to the image. (For technical reasons, we include the mean; the reason is similar to why we "center" multidimensional data at 0.) This can be very useful for reducing the dimension of the data set. For example, here is the first image from above on the left:


    The next image is the overall mean, and each subsequent image is adding another component to the previous. For this particular scan, the mean plus its first component is enough to see that it's a 0.

    For example, assuming the function is in p46 and the appropriate libraries are loaded:

    from sklearn import datasets
    from sklearn.decomposition import PCA
    digits = datasets.load_digits()
    pca = PCA()
    Xproj = pca.fit_transform(digits.data)
    #Display the mean image:
    plt.imshow(pca.mean_.reshape(8,8),cmap='binary', interpolation='nearest',clim=(0, 16))
    plt.title("Mean for digits")
    plt.show()
    approxAnswer = p46.approxDigits(8,Xproj[1068], pca.mean_, pca.components_)
    plt.imshow(approxAnswer.reshape(8,8),cmap='binary', interpolation='nearest',clim=(0, 16))
    plt.title("mean + 8 components for digits[1068]")
    plt.show()
    would show the mean and summed with the first 8 components for digits[1068]:

    Note: you should submit a file with only the standard comments at the top, this function, and any helper functions you have written. The grading scripts will then import the file for testing.
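
    A minimal sketch of approxDigits(), matching the call above (the number of components, the image's projection, the PCA mean, and the PCA components):

    def approxDigits(numComponents, projection, mean, components):
        """Start from the mean image and add back numComponents PCA components,
        each weighted by the image's projection onto it."""
        approx = mean.copy()
        for i in range(numComponents):
            approx = approx + projection[i] * components[i]
        return approx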

    Program 47: Voting MDS. Due noon, Tuesday, 30 November.
    (Learning Objective: to build intuition and strengthen competency with dimensionality reduction methods.)

    In Lecture #22 and DS 100, Chapter 26.3, we explored Principal Components Analysis (PCA) for a US Representatives voting dataset. For this program, we will examine the dataset with Multidimensional Scaling (MDS) for different distance matrices. There are two functions to write:

    For example, assuming your functions are in p47 and the appropriate libraries are loaded:

    from sklearn.manifold import MDS
    from scipy.spatial.distance import cdist
    import p47
    #Helper function to display plots:
    def displayPlot(vote2d,title):
        sns.scatterplot(data = vote2d,x="x", y="y", hue="party",
                        hue_order=['Democrat', 'Republican', 'Libertarian']);
        plt.title(title)
        plt.show()
    df = pd.read_csv('vote_pivot.csv')
    votes = df.drop('member',axis=1).to_numpy()
    legs = pd.read_csv('legislators.csv')[['leg_id','party']]
    #Fit to Euclidean distances:
    md_fit = p47.makeMDS(votes)
    vote2d = p47.makeDisplayDF(df,md_fit,legs)
    displayPlot(vote2d,'MDS of Votes with Euclidean Distances')
    #Fit to Hamming distances:
    md_fit = p47.makeMDS(votes, metric="hamming")
    vote2d = p47.makeDisplayDF(df,md_fit,legs)
    displayPlot(vote2d,'MDS of Votes with Hamming Distances')
    #Fit to Manhattan distances:
    md_fit = p47.makeMDS(votes, metric="cityblock")
    vote2d = p47.makeDisplayDF(df,md_fit,legs)
    displayPlot(vote2d,'MDS of Votes with Manhattan Distances')
    

    would display:

    The above runs used the files from the textbook: vote_pivot.csv and legislators.csv.

    Note: you should submit a file with only the standard comments at the top, this function, and any helper functions you have written. The grading scripts will then import the file for testing.
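
    A hedged sketch of the two functions (the 'member' and 'leg_id' column names are assumptions based on the textbook files):

    import pandas as pd
    from sklearn.manifold import MDS
    from scipy.spatial.distance import cdist

    def makeMDS(votes, metric='euclidean', random_state=42):
        """Build a pairwise distance matrix with the given metric and fit a 2D
        MDS embedding to it."""
        dists = cdist(votes, votes, metric=metric)
        mds = MDS(n_components=2, dissimilarity='precomputed',
                  random_state=random_state)
        return mds.fit_transform(dists)

    def makeDisplayDF(df, md_fit, legs):
        """Pair each member with its 2D coordinates and merge in the party."""
        vote2d = pd.DataFrame({'member': df['member'],
                               'x': md_fit[:, 0], 'y': md_fit[:, 1]})
        return vote2d.merge(legs, left_on='member', right_on='leg_id')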

    Program 48: Transit Distances. Due noon, Thursday, 2 December.
    (Learning Objective: to gain better insight into non-Euclidean distances via dimensionality reduction methods.)

    This program focuses on travel times in NYC and how well they estimate the aerial (Euclidean) distance between points. To compare various measures of distances, we need several functions:

    • extractMx(df, dropCols = ['Name','Position']): This function has two inputs and returns a matrix with all times in minutes.
      • df: a DataFrame. Assumes that the columns specified by dropCols are columns of df and the remaining entries are expressed as hours and minutes.
      • dropCols: columns containing non-temporal data, to be dropped before converting to numeric values. The default value is ['Name','Position'].
      The function returns the array of numeric values. Each entry is the number of minutes corresponding to the input entry. For example, if the entry in the DataFrame is 59 mins, then 59 is placed in the corresponding entry of the matrix. Similarly, if the entry in the DataFrame is 2 hours 3 mins, then 123 is placed in the corresponding entry of the matrix.
    • scaleMx(distMx, i=0,j=1): This function has three inputs and returns the matrix distMx scaled by the ith, jth entry.
      • distMx: a distance matrix. Assumes diagonal values are 0, and all other values are numeric and non-negative.
      • i: the x coordinate of the entry to be used to scale the matrix. It has a default value of 0.
      • j: the y coordinate of the entry to be used to scale the matrix. It has a default value of 1.
      The function returns the array scaled by the entry at [i, j] (by default, the entry at [0, 1]). That is, it divides all entries through by that value. For example, if the specified entry is 59, then all entries in the returned matrix are divided by 59.

    Using Google Maps API, we generated the amount of time it would take to travel between the following landmarks:

    by driving, transit, and walking (files: nyc_landmarks_driving.csv, nyc_landmarks_transit.csv, nyc_landmarks_walking.csv ).

    Each file has the entries listed in hours and minutes. The first function extracts the time from each and returns a matrix of numeric values representing total minutes for each entry. For example, assuming your functions are in p48 and the appropriate libraries are loaded:

    transit = pd.read_csv('nyc_landmarks_transit.csv')
    print(transit)
    transit_mx = p48.extractMx(transit)
    print(transit_mx)
    would print:
                              Name  ...  Hunter College
    0        Empire State Building  ...         21 mins
    1                    Bronx Zoo  ...   1 hour 2 mins
    2   National Lighthouse Museum  ...  1 hour 21 mins
    3  FDR Four Freedom State Park  ...         37 mins
    4                   Citi Field  ...         35 mins
    5                 Coney Island  ...  1 hour 10 mins
    6               Hunter College  ...          0 mins
    
    [7 rows x 9 columns]
    [[  0  55  79  29  40  61  21]
     [ 55   0 105  72  70 106  62]
     [ 99 113   0  83  98 105  81]
     [ 29  71  92   0  41  84  37]
     [ 39  73 105  41   0  95  35]
     [ 59 123  91  83 102   0  70]
     [ 16  67  86  24  35  71   0]]
    If we normalize by the first non-zero entry, at (0,1) (note that we can't use (0,0), or any (i,i), since it's a distance matrix and we would be dividing through by 0):
    transit_normed = p48.scaleMx(transit_mx)
    print(transit_normed)
    would print:
    [[0.         1.         1.43636364 0.52727273 0.72727273 1.10909091 0.38181818]
     [1.         0.         1.90909091 1.30909091 1.27272727 1.92727273  1.12727273]
     [1.8        2.05454545 0.         1.50909091 1.78181818 1.90909091  1.47272727]
     [0.52727273 1.29090909 1.67272727 0.         0.74545455 1.52727273  0.67272727]
     [0.70909091 1.32727273 1.90909091 0.74545455 0.         1.72727273  0.63636364]
     [1.07272727 2.23636364 1.65454545 1.50909091 1.85454545 0.          1.27272727]
     [0.29090909 1.21818182 1.56363636 0.43636364 0.63636364 1.29090909  0.        ]]

    Note: you should submit a file with only the standard comments at the top, this function, and any helper functions you have written. The grading scripts will then import the file for testing.
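
    A possible sketch of the two functions (the parsing below assumes every entry looks like '59 mins' or '1 hour 2 mins'):

    import numpy as np

    def extractMx(df, dropCols=['Name', 'Position']):
        """Drop non-temporal columns and convert each entry to total minutes."""
        def toMinutes(entry):
            minutes = 0
            tokens = str(entry).split()
            for value, unit in zip(tokens[::2], tokens[1::2]):
                minutes += 60 * int(value) if unit.startswith('hour') else int(value)
            return minutes
        times = df.drop(columns=dropCols, errors='ignore')
        return times.applymap(toMinutes).to_numpy()

    def scaleMx(distMx, i=0, j=1):
        """Divide every entry by the entry at [i, j]."""
        return np.asarray(distMx) / distMx[i][j]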

    Program 49: Toy Clusters. Due noon, Friday, 3 December.
    (Learning Objective: to build intuition and facility with k-means clustering.)

    In Lecture #25 and Python Data Science Handbook: Section 5.11, we clustered the digits data set using K-means clustering and used t-SNE to improve accuracy. The digits data set is one of 7 toy datasets included in sklearn that can be quickly loaded to try different algorithms.

    For this program, modify the code from lecture to write a function that allows you to run three different preprocessing options on a toy dataset ("none", "TSNE", or "MDS"), applies K-Means clustering, takes the mode of each cluster as the predicted label, and then returns the accuracy of the prediction.

    For example, assuming your functions are in p49 and the appropriate libraries are loaded, we can run the function on the relatively small datasets of iris species and wine classifications:

    iris = datasets.load_iris()
    no_preproc = p49.clusterDemo(iris)
    print(f'Iris:  The accuracy with no-preprocessing is {no_preproc}.')
    tsne_proc = p49.clusterDemo(iris, method = "TSNE")
    print(f'Iris: The accuracy with TSNE preprocessing is {tsne_proc}.')
    mds_proc = p49.clusterDemo(iris, method = "MDS")
    print(f'Iris: The accuracy with MDS preprocessing is {mds_proc}.')
    
    wine = datasets.load_wine()
    no_preproc = p49.clusterDemo(wine, n_components = 3, random_state=10)
    print(f'Wine:  The accuracy with no-preprocessing is {no_preproc}.')
    tsne_proc = p49.clusterDemo(wine, n_components = 3, method = "TSNE", random_state=10)
    print(f'Wine: The accuracy with TSNE preprocessing is {tsne_proc}.')
    would print:
    Iris:  The accuracy with no-preprocessing is 0.8933333333333333.
    Iris: The accuracy with TSNE preprocessing is 0.9133333333333333.
    Iris: The accuracy with MDS preprocessing is 0.9.
    Wine:  The accuracy with no-preprocessing is 0.702247191011236.
    Wine: The accuracy with TSNE preprocessing is 0.6797752808988764.

    We can also run on the digits dataset. It's larger and the t-SNE and MDS methods will take a bit of time to return their answers:

    digits = datasets.load_digits()
    no_preproc = p49.clusterDemo(digits, n_clusters = 10, method = "none", random_state=20)
    print(f'Digits: The accuracy with no-preprocessing is {no_preproc}.')
    tsne_proc = p49.clusterDemo(digits, n_clusters = 10, method = "TSNE", random_state=20)
    print(f'Digits: The accuracy with TSNE preprocessing is {tsne_proc}.')
    mds_proc = p49.clusterDemo(digits, n_clusters = 10, method = "MDS", random_state=20)
    print(f'Digits: The accuracy with MDS preprocessing is {mds_proc}.')
    would print:
    Digits: The accuracy with no-preprocessing is 0.7946577629382304.
    Digits: The accuracy with TSNE preprocessing is 0.9432387312186978.
    Digits: The accuracy with MDS preprocessing is 0.676126878130217.

    Note: you should submit a file with only the standard comments at the top, this function, and any helper functions you have written. The grading scripts will then import the file for testing.

    Hints:

    Program 50: 4-Coloring. Due noon, Tuesday, 7 December.
    (Learning Objective: to apply k-means clustering to larger datasets.)

    In Lecture #25 and Python Data Science Handbook: Section 5.11, we used K-Means clustering to display an image using 16 colors. The color values of the image were treated as 3D vectors, and the chosen colors were the centers of the clusters of those values. For this program, write a function that takes an image and the number of clusters and returns the image recolored with cluster centers.
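
    A possible way to structure the function is sketched below; the default n_clusters=4 (to match the "4-coloring" of the title) and random_state=0 are assumptions, and MiniBatchKMeans is used as in the Handbook example (plain KMeans would also work, just more slowly).

    import numpy as np
    from sklearn.cluster import MiniBatchKMeans

    def coloring(image, n_clusters=4, random_state=0):
        # Flatten the image into a (num_pixels, 3) array of RGB values scaled to [0, 1]
        pixels = image.reshape(-1, 3) / 255.0
        kmeans = MiniBatchKMeans(n_clusters=n_clusters, random_state=random_state)
        labels = kmeans.fit_predict(pixels)
        # Recolor every pixel with the center of the cluster it was assigned to
        recolored = kmeans.cluster_centers_[labels]
        return recolored.reshape(image.shape)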

    For example, assuming your functions are in p50 and the appropriate libraries are loaded, here is the example from lecture:

    from sklearn.datasets import load_sample_image
    china = load_sample_image("china.jpg")
    china_4col = p50.coloring(china)
    fig, ax = plt.subplots(1, 2, figsize=(16, 6),
                           subplot_kw=dict(xticks=[], yticks=[]))
    fig.subplots_adjust(wspace=0.05)
    ax[0].imshow(china)
    ax[0].set_title('Original Image', size=16)
    ax[1].imshow(china_4col)
    ax[1].set_title('4-color Image', size=16);
    plt.show()
    which displays the original image side by side with the 4-color image.

    We can also run it on hunterFlag.jpg (since it's a larger file it will take a bit longer to run):

    hunter = plt.imread('hunterFlag.jpg')
    hunter_4col = p50.coloring(hunter, random_state = 70)
    hunter_2col = p50.coloring(hunter, n_clusters = 2, random_state = 70)
    fig, ax = plt.subplots(1, 3, figsize=(16, 6),
                           subplot_kw=dict(xticks=[], yticks=[]))
    fig.subplots_adjust(wspace=0.05)
    ax[0].imshow(hunter)
    ax[0].set_title('Original Image', size=16)
    ax[1].imshow(hunter_4col)
    ax[1].set_title('4-color Image', size=16);
    ax[2].imshow(hunter_2col)
    ax[2].set_title('2-color Image', size=16);
    plt.show()
    and displays the original, 4-color, and 2-color images side by side.

    Note: you should submit a file with only the standard comments at the top, this function, and any helper functions you have written. The grading scripts will then import the file for testing.








    Project

    The required final project synthesizes the skills acquired in the course to analyze and visualize data on a topic of your choosing. It is your chance to demonstrate what you have learned and your creativity on a project that you are passionate about. The intended audience for your project is your classmates as well as tech recruiters and potential employers.

    Milestones

    The project is broken down into smaller pieces that must be submitted by the deadlines below. For details of each milestone, see the links. The project is worth 25% of the final grade. The point breakdown is listed in the Points column of the table below.

    Deadline | Deliverables | Points | Submitted Via
    Wednesday, 6 October, noon | Pre-Proposal | 15 | Gradescope
    Wednesday, 3 November, noon (originally 20 October) | Title & Proposal | 30 (originally 20) | Gradescope (originally Blackboard)
    Wednesday, 27 October, noon | Peer Review #1 | 15 | Blackboard
    Wednesday, 10 November, noon (originally 3 November) | Check-in #1 (Data Collection) | 20 | Gradescope
    Wednesday, 17 November, noon (originally 10 November) | Check-in #2 (Analysis) | 20 | Gradescope
    Wednesday, 24 November, noon (originally 17 November) | Check-in #3 (Visualization) | 20 | Gradescope
    Wednesday, 1 December, noon | Draft Abstract & Website | 25 | Gradescope
    Monday, 6 December, noon | Peer Review #2 | 20 (originally 15) | Gradescope
    Thursday, 9 December, noon | Abstract | 25 | Gradescope
    Friday, 10 December, noon | Complete Project & Website | 50 | Gradescope
    Monday, 13 December, noon | Project Video & Presentation Slides | 25 | Gradescope
    Total Points: 250




    Pre-Proposal

    This pre-proposal is meant to guide you as you brainstorm about your project. It will also lead up to a more formal and structured project proposal later on. The window for submitting pre-proposals opens Wednesday, 29 September. If you would like feedback and the opportunity to resubmit for a higher grade, submit early in the window. Feel free to re-submit as many times as you like, up until the assignment deadline. The instructing team will work hard to give feedback on your submission as quickly as possible, and submissions will be graded in the order they are received.

    In the pre-proposal, answer each question with 1 to 2 sentences:


    Title & Proposal

    The title and proposal serve multiple purposes: they provide a framework to structure the proposed work, can form the overview on your project website, and form the basis of an elevator pitch for when you are asked in interviews to explain a project from your digital portfolio.

    The structure echoes that of the pre-proposal; an excellent way to start is to expand your pre-proposal into the template below, incorporating the feedback you received on it. Submission is via Blackboard Turnitin, and the file formats accepted include PDF, HTML, DOC, and RTF files as well as plain text. Your project proposal will be evaluated by three other students in the course using the rubric below.

    Your file should include the following:

    Peer Review #1

    The proposal and title will be graded by three other students, following the rubric below.

    Grading Rubric for First Peer Review:

    1. Does the title accurately capture the planned project?
    2. Read through the proposal and describe it in your own words in 2 sentences.
    3. Does the objective section clearly describe the project? What would you add to make it clearer or more reflective of the project?
    4. Why is this project important or interesting? Include the reasons from the proposal. If you found none, provide two reasons.
    5. Was the explanation of key terms sufficient?
    6. Did the links provided work? Was the data chosen well and sufficient to accomplish the objectives above?
    7. Are the libraries and dependencies appropriate for the project?
    8. Are the planned outputs appropriate for the project?
    9. Were security and privacy considerations handled sufficiently?
    10. Based on their success metric, do you think the proposed solution will be successful? Why or why not?

    Check-in #1 (Data Collection)

    There are periodic check-ins to make sure that you are making progress on your project. All ask for the following information: In addition, the first one focuses on data collection and includes

    Check-in #2 (Analysis)

    There are periodic check-ins to make sure that you are making progress on your project. All ask for the following information: In addition, the second one focuses on progress you have made on your data analysis and includes the results of the initial analysis for each data set.

    Check-in #3 (Visualization)

    There are periodic check-ins to make sure that you are making progress on your project. All ask for the following information: In addition, the third one focuses on the visualizations and includes:

    Draft Abstract & Website

    This is done via Gradescope. You will be sent an invitation to a subcourse (based on the theme you specified in the last check-in). If you did not submit a theme, you will be automatically placed in the "general" subcourse. The following are requested:

    Peer Review #2

    This is done via Gradescope. You will be sent an invitation to a subcourse (based on your theme). The themes are:

    Your access to the subcourse will be changed to Reader/TA after the drafts have been submitted. You need to complete peer reviews for 4 other students. The system logs all entries, and grades will be assigned based on those automatic entries. The points for this portion are given for each peer review you complete (up to 4).

    For the peer review, as covered in Lecture #25 (video available on Blackboard):

    1. Log into your themed Gradescope course.
    2. Click on Draft Abstract & Website under the list of ACTIVE ASSIGNMENTS:


    3. The next screen shows the Grading Dashboard:


    4. Click on 1: Draft Abstract. It will show either a list of submissions or a grading screen. If the former, click on the first name to get to a submission. On the right hand side of the window, you will see a grading rubric with check boxes for 4 different reviewers. Scroll down to the first reviewer that has not been started; in this case, it's Reviewer 2. If all 4 reviews have been completed for this submission, or it's the work you submitted, click on the Next button to see another review.


    5. Read the abstract on the left, and check the corresponding boxes on the right. Append any comments to the end of the comment box at the bottom of the menu. Then click the Next button.
    6. If you have finished fewer than 4 reviews of abstracts, repeat from Step 4. If you have finished 4 reviews of abstracts, go back to the Grading Dashboard and fill out 4 reviews of websites.

    Be kind and constructive with your comments!

    Abstract

    This is the final version of your abstract and associated information. It should include 2-3 sentences describing the project and your results.

    This portion of the project is submitted via Gradescope as a text file. It will first be screened by an autograder to make sure the required fields are included. After the submission deadline, each abstract will also be read and the remaining points assigned manually. The required fields follow those of the earlier draft submission. For the autograder to find each field, precede your entry by the field name followed by a colon. For example, for the title, your file should include:

    
    Title: YOURTITLEHERE
    
    The autograder is expecting the following fields in your text file:

    Complete Project & Website

    This part of your project is submitted via Gradescope as a .py file whose introductory comments include your website (preceded by URL:) and whose body contains the code you wrote for your project.

    The autograder will check that the Python file includes the title, the resources, and the URL of your website. After the submission deadline, the code and the website will be graded.
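
    For instance, the introductory comments might look something like the sketch below; the exact wording should follow the standard comments used for the programs, and the values shown are placeholders:

    # Title: <your project title>
    # URL: <link to your project website>
    # Resources: <data sources and references used>

    # ... the code you wrote for the project follows here ...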

    Your code should include documentation about what each function does and details about the data format and sources.

    The project must be submitted as a webpage (use Google Sites or another pre-built option if you're not comfortable writing HTML). The project website must include:

    Presentation Slides

    For the last part of the project, submit to Gradescope a PDF file containing two slides that serve as a graphical overview ("lightning talk" slides) of your project: