def tokenize(myDocument):
    """Separates all the words in the passed-in document and returns
    them as a list of strings. The algorithm would scan the text and
    split it into words at whitespace and punctuation boundaries.
    Running this function would return a list like ['The', 'quick',
    'brown', 'fox', 'jumped', 'over', 'the', 'fence']."""
    pass

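# A sketch of one way the tokenize stub above could be filled in,
# assuming that splitting on whitespace and stripping punctuation is
# enough; the exact regex pattern is an assumption, not part of the
# original design.
import re

def tokenize(myDocument):
    """Split the document into a list of word strings."""
    # Runs of letters (with optional internal apostrophes) count as
    # words; whitespace and punctuation act as separators.
    return re.findall(r"[A-Za-z]+(?:'[A-Za-z]+)?", myDocument)

# Example:
# tokenize("The quick brown fox jumped over the fence.")
# -> ['The', 'quick', 'brown', 'fox', 'jumped', 'over', 'the', 'fence']
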
def bestMatch(word):
    """Finds the 10 best matches for a particular word, based on the
    letters used in the word and the order in which they appear. For
    example, a passed-in word like "mtch" would return a list such as
    ['match', 'mitch', ...]. Matches would also be found via
    subsequence matching, plus phonetic matching to catch words that
    sound like the input."""
    pass

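# A sketch of one way the bestMatch stub above could be filled in.
# difflib.SequenceMatcher scores a candidate by the letters it shares
# with the input and the order they appear in, which covers the
# subsequence-style matching the docstring describes. WORDS is an
# assumed name for the global dictionary list (defined below with
# addWord); the phonetic-matching step is omitted from this sketch.
import difflib

def bestMatch(word):
    """Return up to the 10 dictionary words most similar to `word`."""
    # Rank every dictionary word by its similarity ratio to `word`,
    # then keep the 10 highest-scoring candidates.
    ranked = sorted(
        WORDS,
        key=lambda w: difflib.SequenceMatcher(None, word.lower(), w.lower()).ratio(),
        reverse=True,
    )
    return ranked[:10]

# Example: with 'match' and 'mitch' in WORDS,
# bestMatch("mtch") -> ['match', 'mitch', ...]
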
def addWord(word):
    """Adds the word to the global dictionary list by appending it."""
    pass

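# A sketch of the addWord stub above. WORDS is a hypothetical name for
# the global dictionary list the docstrings refer to; the original
# code never names it.
WORDS = []

def addWord(word):
    """Append the word to the global dictionary list."""
    WORDS.append(word)
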
def removeWord(word):
    """Finds the word in the global dictionary list and removes it via
    the list's remove command."""
    pass
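
# A sketch of the removeWord stub above, reusing the assumed WORDS
# list. list.remove() locates the first occurrence and deletes it in
# one step, so no separate index lookup is needed; the membership
# guard avoids a ValueError when the word is absent.
def removeWord(word):
    """Remove the word from the global dictionary list, if present."""
    if word in WORDS:
        WORDS.remove(word)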