Python clinic day 1: Text processing

Na-Rae Han (naraehan@pitt.edu), 2017-07-12, Pittsburgh NEH Institute “Make Your Edition”

Preparation

Data

Jupyter tips

  • Click + to create a new cell, ► to run
  • Alt+ENTER to run cell, create a new cell below
  • Shift+ENTER to run cell, go to next cell

More on https://www.cheatography.com/weidadeyue/cheat-sheets/jupyter-notebook/

The very basics

First code

  • Printing a string, using print().
In [ ]:
print("hello, world!")

Strings

  • String type objects are enclosed in quotation marks (" or ').
  • + is a concatenation operator.
  • Below, greet is a variable name assigned to a string value.
  • Here we are not explicitly printing; instead, the string value is returned (and Jupyter displays it).
In [ ]:
greet = "Hello, world!"
greet = greet + " I come in peace."
greet
  • String methods such as .upper() and .lower() transform a string.
  • Rather than changing the original string, these methods return a new string value.
In [ ]:
greet.upper()
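To see that .upper() returns a new string rather than modifying the variable, a quick check (using the greet value built above):

```python
greet = "Hello, world! I come in peace."
shout = greet.upper()   # a transformed copy
print(shout)
print(greet)            # the original string is unchanged
```
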
  • Some string methods return a boolean value (True/False)
In [ ]:
# try .isupper(), .isalnum(), .startswith('he')
'hello'.islower()
  • len() returns the length of a string, in number of characters.
In [ ]:
len(greet)
  • in tests whether one string is a substring of another.
In [ ]:
'he' in 'hello'

Numbers

  • Integers and floats are written without quotes.
  • You can use arithmetic operations such as +, -, * and / with numbers.
In [ ]:
num1 = 5678
num2 = 3.141592
result = num1 / num2
print(num1, "divided by", num2, "is", result)  # can print multiple things! 

Lists

  • Lists are enclosed in [ ], with elements separated by commas. Lists can contain strings, numbers, and more.
  • As with strings, you can use len() to get the size of a list.
  • As with strings, you can use in to test whether an element is in a list.
In [ ]:
li = ['red', 'blue', 'green', 'black', 'white', 'pink']
len(li)
In [ ]:
# Try logical operators not, and, or
'mauve' in li
  • A list can be indexed with li[i]. Python indexing starts at 0.
  • A list can be sliced: li[3:5] returns a sub-list beginning at index 3, up to but not including index 5.
In [ ]:
# Try [0], [2], [-1], [3:5], [3:], [:5]
li[4]

for loop

  • Using a for loop, you can loop through a list of items, applying the same set of operations to each element.
  • The embedded code block is marked with indentation.
In [ ]:
for x in li:
    print(x, "is", len(x), "characters long.")
print("Done!")

List comprehension

  • List comprehension builds a new list from an existing list.
  • You can filter to include only certain elements, and you can apply transformations in the process.
  • Try: .upper(), len(), +'ish'
In [ ]:
# filter
[x for x in li if x.endswith('e')]
In [ ]:
# transform
[x+'ish' for x in li]
In [ ]:
# filter and transform
[x.upper() for x in li if len(x)>=5]

Dictionaries

  • Dictionaries hold key:value mappings.
  • len() on a dictionary returns the number of keys.
  • Looping over a dictionary means looping over its keys.
In [ ]:
di = {'Homer':35, 'Marge':35, 'Bart':10, 'Lisa':8}
di['Lisa']
In [ ]:
# 20 years old or younger; x is bound to each key.
[x for x in di if di[x] <= 20]
In [ ]:
len(di)
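The "looping over keys" point can also be seen with a plain for loop over the same dictionary:

```python
di = {'Homer': 35, 'Marge': 35, 'Bart': 10, 'Lisa': 8}
for name in di:              # iterating a dict yields its keys
    print(name, di[name])   # look up each value by its key
```
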

Using NLTK

In [ ]:
import nltk
  • Let's first download some data files. In the downloader window, select Models > punkt > Download.
  • If the server is overloaded, download the punkt.zip file and unzip it into ~/nltk_data/tokenizers/punkt
In [ ]:
nltk.download()
In [ ]:
# Tokenizing function: turns a text (a single string) into a list of word & symbol tokens
nltk.word_tokenize(greet)
In [ ]:
help(nltk.word_tokenize)
In [ ]:
sent = "You haven't seen Star Wars...?"
nltk.word_tokenize(sent)
  • nltk.FreqDist() is another useful NLTK function.
  • It builds a frequency count dictionary from a list.
In [ ]:
# First "Rose" is capitalized. How to lowercase? 
sent = 'Rose is a rose is a rose is a rose.'
toks = nltk.word_tokenize(sent)
print(toks)
In [ ]:
freq = nltk.FreqDist(toks)
freq
In [ ]:
freq.most_common(3)
In [ ]:
freq['rose']
In [ ]:
len(freq)
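One way to answer the "How to lowercase?" question above is a list comprehension over the tokens; sketched here with a plain .split() standing in for nltk.word_tokenize:

```python
# tokens as word_tokenize would roughly produce them
toks = 'Rose is a rose is a rose is a rose .'.split()
toks_lower = [t.lower() for t in toks]   # 'Rose' becomes 'rose'
# a frequency count over toks_lower now groups 'Rose' with 'rose'
toks_lower.count('rose')                 # 4, not 3
```
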

Processing a single text file

Reading in a text file

  • open(filename).read() opens a text file and reads in the content as a single continuous string.
In [ ]:
myfile = 'C:/Users/narae/Desktop/inaugural/1789-Washington.txt'  # Use your own userid; Mac users should omit C:
wtxt = open(myfile).read()
print(wtxt)
In [ ]:
len(wtxt)     # Number of characters in text
In [ ]:
'fellow citizens' in wtxt  # phrase as a substring. try "Americans"
In [ ]:
'th' in wtxt
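A slightly safer reading pattern uses a with block, which closes the file automatically, plus an explicit encoding to avoid platform-dependent defaults. The tiny demo file written here is only for illustration:

```python
# write a small demo file, then read it back
with open('demo.txt', 'w', encoding='utf-8') as f:
    f.write('Fellow-Citizens of the Senate and of the House of Representatives:')

with open('demo.txt', encoding='utf-8') as f:
    txt = f.read()          # file is closed when the block ends
```
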

Tokenize text, compile frequency count

In [ ]:
# Turn off/on pretty printing (prints too many lines)
%pprint    
In [ ]:
# Tokenize text
nltk.word_tokenize(wtxt)
In [ ]:
wtokens = nltk.word_tokenize(wtxt)
len(wtokens)     # Number of words in text
In [ ]:
# Build a dictionary of frequency count
wfreq = nltk.FreqDist(wtokens)
wfreq['the']
In [ ]:
wfreq['we']
In [ ]:
len(wfreq)      # Number of unique words in text
In [ ]:
wfreq.most_common(40)     # 40 most common words

More tomorrow

  • How long are Washington’s sentences on average?
  • Which long words did he use, and how frequent were they?
  • Processing the entire Inaugural Address corpus
    • Which inaugural speech was the longest? The shortest?
    • Which presidents favored long sentences?

All answered in Python Clinic Day 2: Corpus Processing