<86>Apr 21 03:45:08 userdel[62674]: delete user 'rooter'
<86>Apr 21 03:45:08 groupadd[62785]: group added to /etc/group: name=rooter, GID=585
<86>Apr 21 03:45:08 groupadd[62785]: group added to /etc/gshadow: name=rooter
<86>Apr 21 03:45:08 groupadd[62785]: new group: name=rooter, GID=585
<86>Apr 21 03:45:08 useradd[62833]: new user: name=rooter, UID=585, GID=585, home=/root, shell=/bin/bash
<86>Apr 21 03:45:08 userdel[62948]: delete user 'builder'
<86>Apr 21 03:45:08 userdel[62948]: removed group 'builder' owned by 'builder'
<86>Apr 21 03:45:08 userdel[62948]: removed shadow group 'builder' owned by 'builder'
<86>Apr 21 03:45:08 groupadd[63090]: group added to /etc/group: name=builder, GID=586
<86>Apr 21 03:45:08 groupadd[63090]: group added to /etc/gshadow: name=builder
<86>Apr 21 03:45:08 groupadd[63090]: new group: name=builder, GID=586
<86>Apr 21 03:45:08 useradd[63146]: new user: name=builder, UID=586, GID=586, home=/usr/src, shell=/bin/bash
<13>Apr 21 03:45:11 rpmi: libopenblas-0.2.14-alt1.git20150324 1433158855 installed
<13>Apr 21 03:45:11 rpmi: libtcl-8.5.9-alt2 1351878901 installed
<13>Apr 21 03:45:11 rpmi: libexpat-2.2.4-alt0.M80P.1 1503871120 installed
<13>Apr 21 03:45:11 rpmi: libyaml2-0.1.6-alt1 1397147705 installed
<13>Apr 21 03:45:11 rpmi: libgdbm-1.8.3-alt10 1454943313 installed
<13>Apr 21 03:45:11 rpmi: tcl-8.5.9-alt2 1351878901 installed
<13>Apr 21 03:45:11 rpmi: libnumpy-py3-1:1.12.1-alt0.M80P.1 1496160663 installed
<13>Apr 21 03:45:11 rpmi: libnumpy-1:1.12.1-alt0.M80P.1 1496160663 installed
<13>Apr 21 03:45:12 rpmi: libxblas-1.0.248-alt1 1322010716 installed
<13>Apr 21 03:45:12 rpmi: libquadmath0-5.3.1-alt3.M80P.1 p8+225520.100.3.1 1553688800 installed
<13>Apr 21 03:45:12 rpmi: libgfortran3-5.3.1-alt3.M80P.1 p8+225520.100.3.1 1553688800 installed
<13>Apr 21 03:45:12 rpmi: liblapack-1:3.5.0-alt1 1401382194 installed
<13>Apr 21 03:45:12 rpmi: libpng15-1.5.28-alt1 1484572014 installed
<13>Apr 21 03:45:12 rpmi: libgraphite2-1.3.10-alt0.M80P.1 1496411360 installed
<13>Apr 21 03:45:12 rpmi: libX11-locales-3:1.6.3-alt1 1431956885 installed
<13>Apr 21 03:45:12 rpmi: libXdmcp-1.1.1-alt1 1334617699 installed
<13>Apr 21 03:45:12 rpmi: libXau-1.0.8-alt1 1369565807 installed
<13>Apr 21 03:45:12 rpmi: libxcb-1.12-alt2 p8.218219.300 1545313310 installed
<13>Apr 21 03:45:12 rpmi: libX11-3:1.6.3-alt1 1431956911 installed
<13>Apr 21 03:45:12 rpmi: libXrender-0.9.8-alt1 1371312110 installed
<13>Apr 21 03:45:12 rpmi: libtinfo-devel-5.9-alt8 1456756459 installed
<13>Apr 21 03:45:12 rpmi: libncurses-devel-5.9-alt8 1456756459 installed
<13>Apr 21 03:45:12 rpmi: python-modules-curses-2.7.11-alt6.M80P.1 1527682470 installed
<13>Apr 21 03:45:12 rpmi: libverto-0.2.6-alt1_6 1455633234 installed
<13>Apr 21 03:45:12 rpmi: libkeyutils-1.5.10-alt0.M80P.2 p8+216694.100.6.1 1547827915 installed
<13>Apr 21 03:45:12 rpmi: libcom_err-1.42.13-alt2 1449075846 installed
<13>Apr 21 03:45:12 rpmi: ca-certificates-2016.02.25-alt1 1462368370 installed
<13>Apr 21 03:45:12 rpmi: libcrypto10-1.0.2n-alt0.M80P.1 1512766129 installed
<13>Apr 21 03:45:12 rpmi: libssl10-1.0.2n-alt0.M80P.1 1512766129 installed
<13>Apr 21 03:45:12 rpmi: libharfbuzz-1.6.3-alt0.M80P.1 1509918814 installed
<13>Apr 21 03:45:12 rpmi: libfreetype-2.8-alt0.M80P.3 1505462817 installed
<13>Apr 21 03:45:12 rpmi: fontconfig-2.12.6-alt1.M80P.1 1506008910 installed
Updating fonts cache: <29>Apr 21 03:45:13 fontconfig: Updating fonts cache: succeeded [ DONE ]
<13>Apr 21 03:45:13 rpmi: libXft-2.3.2-alt1 1409902650 installed
<13>Apr 21 03:45:13 rpmi: libtk-8.5.9-alt3 1308047279 installed
<13>Apr 21 03:45:13 rpmi: tk-8.5.9-alt3 1308047279 installed
<86>Apr 21 03:45:13 groupadd[71620]: group added to /etc/group: name=_keytab, GID=499
<86>Apr 21 03:45:13 groupadd[71620]: group added to /etc/gshadow: name=_keytab
<86>Apr 21 03:45:13 groupadd[71620]: new group: name=_keytab, GID=499
<13>Apr 21 03:45:13 rpmi: libkrb5-1.14.6-alt1.M80P.1 1525355673 installed
<13>Apr 21 03:45:14 rpmi: python3-base-3.5.4-alt2.M80P.1 1527753911 installed
<13>Apr 21 03:45:14 rpmi: python-modules-compiler-2.7.11-alt6.M80P.1 1527682470 installed
<13>Apr 21 03:45:14 rpmi: python3-module-py-1.4.34-alt0.M80P.1 1503506764 installed
<13>Apr 21 03:45:14 rpmi: python3-3.5.4-alt2.M80P.1 1527753911 installed
<13>Apr 21 03:45:14 rpmi: python-modules-email-2.7.11-alt6.M80P.1 1527682470 installed
<13>Apr 21 03:45:14 rpmi: rpm-build-python3-0.1.10.10-alt1.M80P.1 1530521451 installed
<13>Apr 21 03:45:14 rpmi: python3-module-yaml-3.11-alt1.hg20141128.1 1459664840 installed
<13>Apr 21 03:45:14 rpmi: python-modules-unittest-2.7.11-alt6.M80P.1 1527682470 installed
<13>Apr 21 03:45:14 rpmi: python3-module-pytest-3.2.1-alt0.M80P.1 1503499784 installed
<13>Apr 21 03:45:14 rpmi: python3-module-setuptools-1:18.5-alt0.M80P.1 1497527461 installed
<13>Apr 21 03:45:15 rpmi: python3-module-numpy-1:1.12.1-alt0.M80P.1 1496160663 installed
<13>Apr 21 03:45:15 rpmi: python3-module-numpy-testing-1:1.12.1-alt0.M80P.1 1496160663 installed
<13>Apr 21 03:45:15 rpmi: python-modules-2.7.11-alt6.M80P.1 1527682470 installed
<13>Apr 21 03:45:15 rpmi: python-modules-ctypes-2.7.11-alt6.M80P.1 1527682470 installed
<13>Apr 21 03:45:15 rpmi: python-modules-encodings-2.7.11-alt6.M80P.1 1527682470 installed
<13>Apr 21 03:45:15 rpmi: python-modules-multiprocessing-2.7.11-alt6.M80P.1 1527682470 installed
<13>Apr 21 03:45:15 rpmi: python-modules-logging-2.7.11-alt6.M80P.1 1527682470 installed
<13>Apr 21 03:45:15 rpmi: python-tools-2to3-2.7.11-alt6.M80P.1 1527682470 installed
<13>Apr 21 03:45:15 rpmi: python-modules-xml-2.7.11-alt6.M80P.1 1527682470 installed
<13>Apr 21 03:45:15 rpmi: python-modules-hotshot-2.7.11-alt6.M80P.1 1527682470 installed
<13>Apr 21 03:45:15 rpmi: python-modules-bsddb-2.7.11-alt6.M80P.1 1527682470 installed
<13>Apr 21 03:45:15 rpmi: python-2.7.11-alt6.M80P.1 1527682470 installed
<13>Apr 21 03:45:15 rpmi: python-dev-2.7.11-alt6.M80P.1 1527682470 installed
<13>Apr 21 03:45:15 rpmi: python-module-py-1.4.34-alt0.M80P.1 1503506764 installed
<13>Apr 21 03:45:15 rpmi: python-modules-json-2.7.11-alt6.M80P.1 1527682470 installed
<13>Apr 21 03:45:15 rpmi: python-modules-tkinter-2.7.11-alt6.M80P.1 1527682470 installed
<13>Apr 21 03:45:15 rpmi: python-module-yaml-3.11-alt1.hg20141128.1 1459664840 installed
<13>Apr 21 03:45:15 rpmi: python-module-pytest-3.2.1-alt0.M80P.1 1503499784 installed
<13>Apr 21 03:45:16 rpmi: python-module-numpy-1:1.12.1-alt0.M80P.1 1496160663 installed
<13>Apr 21 03:45:16 rpmi: python-module-numpy-testing-1:1.12.1-alt0.M80P.1 1496160663 installed
<13>Apr 21 03:45:16 rpmi: python-module-setuptools-1:18.5-alt0.M80P.1 1497527461 installed
Installing python-module-nltk-3.0.1-alt1.1.1.src.rpm
Building target platforms: x86_64
Building for target x86_64
Executing(%prep): /bin/sh -e /usr/src/tmp/rpm-tmp.91571
+ umask 022
+ /bin/mkdir -p /usr/src/RPM/BUILD
+ cd /usr/src/RPM/BUILD
+ cd /usr/src/RPM/BUILD
+ rm -rf python-module-nltk-3.0.1
+ echo 'Source #0 (python-module-nltk-3.0.1.tar):'
Source #0 (python-module-nltk-3.0.1.tar):
+ /bin/tar -xf /usr/src/RPM/SOURCES/python-module-nltk-3.0.1.tar
+ cd python-module-nltk-3.0.1
+ /bin/chmod -c -Rf u+rwX,go-w .
+ rm -rvf nltk/yaml/
+ tar xf /usr/src/RPM/SOURCES/nltk_contrib-3.0.1.tar
+ cp -fR . ../python3
+ sed -i 's|u'\''||' ../python3/nltk_contrib/nltk_contrib/scripttranscriber/Unitran/Tables.py
++ find ../python3 -type f -name '*.py'
++ grep -v 'Tables\.py'
+ for i in '$(find ../python3 -type f -name '\''*.py'\'' |grep -v '\''Tables\.py'\'')'
+ 2to3 -w -n ../python3/nltk_contrib/setup.py
RefactoringTool: Skipping optional fixer: buffer
RefactoringTool: Skipping optional fixer: idioms
RefactoringTool: Skipping optional fixer: set_literal
RefactoringTool: Skipping optional fixer: ws_comma
RefactoringTool: No changes to ../python3/nltk_contrib/setup.py
RefactoringTool: Files that need to be modified:
RefactoringTool: ../python3/nltk_contrib/setup.py
+ for i in '$(find ../python3 -type f -name '\''*.py'\'' |grep -v '\''Tables\.py'\'')'
+ 2to3 -w -n ../python3/nltk_contrib/setup-eggs.py
RefactoringTool: Skipping optional fixer: buffer
RefactoringTool: Skipping optional fixer: idioms
RefactoringTool: Skipping optional fixer: set_literal
RefactoringTool: Skipping optional fixer: ws_comma
RefactoringTool: No changes to ../python3/nltk_contrib/setup-eggs.py
RefactoringTool: Files that need to be modified:
RefactoringTool: ../python3/nltk_contrib/setup-eggs.py
+ for i in '$(find ../python3 -type f -name '\''*.py'\'' |grep -v '\''Tables\.py'\'')'
+ 2to3 -w -n ../python3/nltk_contrib/nltk_contrib/wals.py
RefactoringTool: Skipping optional fixer: buffer
RefactoringTool: Skipping optional fixer: idioms
RefactoringTool: Skipping optional fixer: set_literal
RefactoringTool: Skipping optional fixer: ws_comma
RefactoringTool: Refactored ../python3/nltk_contrib/nltk_contrib/wals.py
--- ../python3/nltk_contrib/nltk_contrib/wals.py (original)
+++ ../python3/nltk_contrib/nltk_contrib/wals.py (refactored)
@@ -79,13 +79,13 @@
         def open_csv(filename, remove_header=True):
             filename = os.path.join(data_dir, filename + '.' + file_ext)
             wals_file = csv.reader(open(filename, 'r'), dialect=self.dialect)
-            if remove_header: wals_file.next()
+            if remove_header: next(wals_file)
             for row in wals_file:
-                yield [unicode(cell, encoding) for cell in row]
+                yield [str(cell, encoding) for cell in row]
         def map_fields(vectors, fields):
             for vector in vectors:
-                yield dict(zip(fields, vector))
+                yield dict(list(zip(fields, vector)))
         # Features
         self.features = dict((f['id'], f) for f in
@@ -100,14 +100,14 @@
                 map_fields(open_csv('languages'), language_fields))
         # convert longitude and latitude to float from string
-        for l in self.languages.values():
+        for l in list(self.languages.values()):
            l['latitude'] = float(l['latitude'])
            l['longitude'] = float(l['longitude'])
         # The datapoints file is more complicated. There is a column for
         # every feature, and a row for every language. Each cell is either
         # empty or contains a value dependent on the feature.
         rows = open_csv('datapoints', remove_header=False)
-        header = rows.next()
+        header = next(rows)
         self.data = defaultdict(dict)
         self.feat_lg_map = defaultdict(list)
         for row in rows:
@@ -124,7 +124,7 @@
         self.iso_index = defaultdict(list)
         self.language_name_index = defaultdict(list)
-        for lg in self.languages.values():
+        for lg in list(self.languages.values()):
             for iso in lg['iso_codes'].split():
                 self.iso_index[iso] += [lg]
             name = lg['name'].lower()
@@ -141,7 +141,7 @@
         # family -> genus
         # family -> subfamily -> genus
         lg_hier = {}
-        for lg in self.languages.values():
+        for lg in list(self.languages.values()):
             family = lg_hier.setdefault(lg['family'], LHNode(lg['family']))
             family.languages[lg['wals_code']] = lg
@@ -165,12 +165,12 @@
         @param wals_code: The WALS code for a language.
         """
-        print self.languages[wals_code]['name'], '(%s):' % wals_code
+        print(self.languages[wals_code]['name'], '(%s):' % wals_code)
         data = self.data[wals_code]
         for feat in sorted(data.keys()):
-            print ' ', self.features[feat]['name'], '(%s):' % feat,\
+            print(' ', self.features[feat]['name'], '(%s):' % feat,\
                 self.values[feat][data[feat]]['description'],\
-                '(%s)' % self.values[feat][data[feat]]['value_id']
+                '(%s)' % self.values[feat][data[feat]]['value_id'])
     def get_wals_codes_from_iso(self, iso_code):
         """
@@ -217,36 +217,36 @@
 def demo(wals_directory=None, dialect='excel-tab', encoding='utf-8'):
     if not wals_directory:
         import sys
-        print >>sys.stderr, 'Error: No WALS data directory provided.'
-        print >>sys.stderr, ' You may obtain the database from ' +\
-            'http://wals.info/export'
+        print('Error: No WALS data directory provided.', file=sys.stderr)
+        print(' You may obtain the database from ' +\
+            'http://wals.info/export', file=sys.stderr)
         return
     w = WALS(wals_directory, dialect, encoding)
     # Basic statistics
-    print 'In database:\n %d\tlanguages\n %d\tfeatures ' %\
-        (len(w.languages), len(w.features))
+    print('In database:\n %d\tlanguages\n %d\tfeatures ' %\
+        (len(w.languages), len(w.features)))
     # values are a nested dictionary (w.values[feature_id][value_id])
-    num_vals = sum(map(len, w.values.values()))
-    print ' %d\ttotal values (%f avg. number per feature)' %\
-        (num_vals, float(num_vals)/len(w.features))
+    num_vals = sum(map(len, list(w.values.values())))
+    print(' %d\ttotal values (%f avg. number per feature)' %\
+        (num_vals, float(num_vals)/len(w.features)))
     # More statistics
-    print " %d languages specify feature 81A (order of S, O, and V)" %\
-        (len(w.get_languages_with_feature('81A')))
-    print " %d langauges have VOS order" %\
-        (len(w.get_languages_with_feature('81A', value='4')))
+    print(" %d languages specify feature 81A (order of S, O, and V)" %\
+        (len(w.get_languages_with_feature('81A'))))
+    print(" %d langauges have VOS order" %\
+        (len(w.get_languages_with_feature('81A', value='4'))))
     # Getting language data
-    print "\nGetting data for languages named 'Irish'"
+    print("\nGetting data for languages named 'Irish'")
     for wals_code in w.get_wals_codes_from_name('Irish'):
         l = w.languages[wals_code]
-        print ' %s (ISO-639 code: %s WALS code: %s)' %\
-            (l['name'], l['iso_codes'], wals_code)
-    print "\nGetting data for languages with ISO 'isl'"
+        print(' %s (ISO-639 code: %s WALS code: %s)' %\
+            (l['name'], l['iso_codes'], wals_code))
+    print("\nGetting data for languages with ISO 'isl'")
     for wals_code in w.get_wals_codes_from_iso('isl'):
         w.show_language(wals_code)
-    print "\nLocations of dialects for the Min Nan language (ISO 'nan'):"
+    print("\nLocations of dialects for the Min Nan language (ISO 'nan'):")
     for wals_code in w.get_wals_codes_from_iso('nan'):
         l = w.languages[wals_code]
-        print " %s\tlat:%f\tlong:%f" %\
-            (l['name'], l['latitude'], l['longitude'])
+        print(" %s\tlat:%f\tlong:%f" %\
+            (l['name'], l['latitude'], l['longitude']))
RefactoringTool: Files that were modified:
RefactoringTool: ../python3/nltk_contrib/nltk_contrib/wals.py
+ for i in '$(find ../python3 -type f -name '\''*.py'\'' |grep -v '\''Tables\.py'\'')'
+ 2to3 -w -n ../python3/nltk_contrib/nltk_contrib/toolbox/utilities.py
RefactoringTool: Skipping optional fixer: buffer
RefactoringTool: Skipping optional fixer: idioms
RefactoringTool: Skipping optional fixer: set_literal
RefactoringTool: Skipping optional fixer: ws_comma
RefactoringTool: Refactored ../python3/nltk_contrib/nltk_contrib/toolbox/utilities.py
RefactoringTool: Files that were modified:
RefactoringTool: ../python3/nltk_contrib/nltk_contrib/toolbox/utilities.py
--- ../python3/nltk_contrib/nltk_contrib/toolbox/utilities.py (original)
+++ ../python3/nltk_contrib/nltk_contrib/toolbox/utilities.py (refactored)
@@ -169,7 +169,7 @@
         return dict
     def items(self):
-        return zip(self._keys, self.values())
+        return list(zip(self._keys, list(self.values())))
     def keys(self):
         return self._keys
@@ -191,9 +191,9 @@
     def update(self, dict):
         UserDict.update(self, dict)
-        for key in dict.keys():
+        for key in list(dict.keys()):
             if key not in self._keys:
                 self._keys.append(key)
     def values(self):
-        return map(self.get, self._keys)
+        return list(map(self.get, self._keys))
+ for i in '$(find ../python3 -type f -name '\''*.py'\'' |grep -v '\''Tables\.py'\'')'
+ 2to3 -w -n ../python3/nltk_contrib/nltk_contrib/toolbox/text.py
RefactoringTool: Skipping optional fixer: buffer
RefactoringTool: Skipping optional fixer: idioms
RefactoringTool: Skipping optional fixer: set_literal
RefactoringTool: Skipping optional fixer: ws_comma
RefactoringTool: Refactored ../python3/nltk_contrib/nltk_contrib/toolbox/text.py
RefactoringTool: Files that were modified:
RefactoringTool: ../python3/nltk_contrib/nltk_contrib/toolbox/text.py
--- ../python3/nltk_contrib/nltk_contrib/toolbox/text.py (original)
+++ ../python3/nltk_contrib/nltk_contrib/toolbox/text.py (refactored)
@@ -10,7 +10,7 @@
 """
 import re
-from utilities import Field, SequentialDictionary
+from .utilities import Field, SequentialDictionary
 from nltk.corpus.reader.toolbox import StandardFormat
@@ -185,7 +185,7 @@
     def get_field_markers(self):
         """Obtain list of unique fields for the line."""
-        return self._fields.keys()
+        return list(self._fields.keys())
     def get_field_as_string(self,
                             field_marker,
@@ -224,7 +224,7 @@
     def get_field_values(self):
         """Obtain list of field values for the line."""
-        return self._fields.values()
+        return list(self._fields.values())
     def get_label(self):
         """Obtain identifier for line."""
@@ -246,7 +246,7 @@
         """Obtain a list of morpheme objects for the line."""
         morphemes = []
         indices = get_indices(self.getFieldValueByFieldMarker("m"))
-        print "%s" % indices
+        print("%s" % indices)
         morphemeFormField = self.getFieldValueByFieldMarker("m")
         morphemeGlossField = self.getFieldValueByFieldMarker("g")
         morphemeFormSlices = get_slices_by_indices(morphemeFormField, indices)
@@ -519,7 +519,7 @@
     """This method finds the indices for the leftmost
     boundaries of the units in a line of aligned text.
-    Given the field \um, this function will find the
+    Given the field \\um, this function will find the
     indices identifing leftmost word boundaries, as
     follows::
@@ -527,7 +527,7 @@
         | | | | |||||||||||||||||||||||||||
     \sf dit is een goede      <- surface form
-    \um dit is een goed -e    <- underlying morphemes
+    \\um dit is een goed -e    <- underlying morphemes
     \mg this is a good -ADJ   <- morpheme gloss
     \gc DEM V ART ADJECTIVE -SUFF <- grammatical categories
     \ft This is a good explanation. <- free translation
+ for i in '$(find ../python3 -type f -name '\''*.py'\'' |grep -v '\''Tables\.py'\'')'
+ 2to3 -w -n ../python3/nltk_contrib/nltk_contrib/toolbox/settings.py
RefactoringTool: Skipping optional fixer: buffer
RefactoringTool: Skipping optional fixer: idioms
RefactoringTool: Skipping optional fixer: set_literal
RefactoringTool: Skipping optional fixer: ws_comma
RefactoringTool: Refactored ../python3/nltk_contrib/nltk_contrib/toolbox/settings.py
--- ../python3/nltk_contrib/nltk_contrib/toolbox/settings.py (original)
+++ ../python3/nltk_contrib/nltk_contrib/toolbox/settings.py (refactored)
@@ -45,7 +45,7 @@
         """Obtain a list of all of the field markers for the marker set.
         @returns: list of field markers
         @rtype: list of strings"""
-        return self._dict.keys()
+        return list(self._dict.keys())
     def add_field_metadata(self, fmeta) :
         """Add FieldMetadata object to dictionary of marker sets, keyed by field marker.
@@ -366,54 +366,54 @@
         else :
             pass
-        print "----- Interlinear Process -----"
-        print " FROM: [%s]" % ip.get_from_marker()
-        print " TO: [%s]" % ip.get_to_marker()
-        print " GLOSS SEP: [%s]" % ip.get_gloss_separator()
-        print " FAIL MARK: [%s]" % ip.get_failure_marker()
-        print " SHOW FAIL MARK: [%s]" % ip.show_failure_marker()
-        print " SHOW ROOT GUESS: [%s]" % ip.show_root_guess()
-        print " PARSE PROCESS: [%s]" % ip.is_parse_process()
+        print("----- Interlinear Process -----")
+        print(" FROM: [%s]" % ip.get_from_marker())
+        print(" TO: [%s]" % ip.get_to_marker())
+        print(" GLOSS SEP: [%s]" % ip.get_gloss_separator())
+        print(" FAIL MARK: [%s]" % ip.get_failure_marker())
+        print(" SHOW FAIL MARK: [%s]" % ip.show_failure_marker())
+        print(" SHOW ROOT GUESS: [%s]" % ip.show_root_guess())
+        print(" PARSE PROCESS: [%s]" % ip.is_parse_process())
         trilook = proc.find("triLook")
         if trilook :
-            print " -- trilook --"
-            print " DB TYPE: [%s]" % self.__parse_value(trilook, "dbtyp")
-            print " MKR OUTPUT: [%s]" % self.__parse_value(trilook, "mkrOut")
+            print(" -- trilook --")
+            print(" DB TYPE: [%s]" % self.__parse_value(trilook, "dbtyp"))
+            print(" MKR OUTPUT: [%s]" % self.__parse_value(trilook, "mkrOut"))
         tripref = proc.find("triPref")
         if tripref :
-            print " -- tripref --"
-            print " DB TYPE: [%s]" % self.__parse_value(tripref, "dbtyp")
-            print " MKR OUTPUT: [%s]" % self.__parse_value(tripref, "mkrOut")
+            print(" -- tripref --")
+            print(" DB TYPE: [%s]" % self.__parse_value(tripref, "dbtyp"))
+            print(" MKR OUTPUT: [%s]" % self.__parse_value(tripref, "mkrOut"))
             try :
                 for d in tripref.findall("drflst/drf") :
-                    print " DB: [%s]" % self.__parse_value(d, "File")
+                    print(" DB: [%s]" % self.__parse_value(d, "File"))
             except :
                 pass
            try :
                 for d in tripref.find("mrflst") :
-                    print " MKR: [%s]" % d.text
+                    print(" MKR: [%s]" % d.text)
             except :
                 pass
         triroot = proc.find("triRoot")
         if triroot :
-            print " -- triroot --"
-            print " DB TYPE: [%s]" % self.__parse_value(triroot, "dbtyp")
-            print " MKR OUTPUT: [%s]" % self.__parse_value(triroot, "mkrOut")
+            print(" -- triroot --")
+            print(" DB TYPE: [%s]" % self.__parse_value(triroot, "dbtyp"))
+            print(" MKR OUTPUT: [%s]" % self.__parse_value(triroot, "mkrOut"))
             try :
                 for d in triroot.findall("drflst/drf") :
-                    print " DB: [%s]" % self.__parse_value(d, "File")
+                    print(" DB: [%s]" % self.__parse_value(d, "File"))
             except :
                 pass
             try :
                 for d in triroot.find("mrflst") :
-                    print " MKR: [%s]" % d.text
+                    print(" MKR: [%s]" % d.text)
             except :
                 pass
-        print ""
+        print("")
         # Handle metadata for field markers (aka, marker set)
         for mkr in self._tree.findall('mkrset/mkr') :
RefactoringTool: Files that were modified:
RefactoringTool: ../python3/nltk_contrib/nltk_contrib/toolbox/settings.py
+ for i in '$(find ../python3 -type f -name '\''*.py'\'' |grep -v '\''Tables\.py'\'')'
+ 2to3 -w -n ../python3/nltk_contrib/nltk_contrib/toolbox/normalise.py
RefactoringTool: Skipping optional fixer: buffer
RefactoringTool: Skipping optional fixer: idioms
RefactoringTool: Skipping optional fixer: set_literal
RefactoringTool: Skipping optional fixer: ws_comma
RefactoringTool: Refactored ../python3/nltk_contrib/nltk_contrib/toolbox/normalise.py
RefactoringTool: Files that were modified:
RefactoringTool: ../python3/nltk_contrib/nltk_contrib/toolbox/normalise.py
--- ../python3/nltk_contrib/nltk_contrib/toolbox/normalise.py (original)
+++ ../python3/nltk_contrib/nltk_contrib/toolbox/normalise.py (refactored)
@@ -31,7 +31,7 @@
     add_default_fields(lexicon, hierarchy.default_fields)
     sort_fields(lexicon, hierarchy.field_order)
     add_blank_lines(lexicon, hierarchy.blanks_before, hierarchy.blanks_between)
-    print to_sfm_string(lexicon, encoding='utf8')
+    print(to_sfm_string(lexicon, encoding='utf8'))
 if __name__ == '__main__':
+ for i in '$(find ../python3 -type f -name '\''*.py'\'' |grep -v '\''Tables\.py'\'')'
+ 2to3 -w -n ../python3/nltk_contrib/nltk_contrib/toolbox/lexicon.py
RefactoringTool: Skipping optional fixer: buffer
RefactoringTool: Skipping optional fixer: idioms
RefactoringTool: Skipping optional fixer: set_literal
RefactoringTool: Skipping optional fixer: ws_comma
RefactoringTool: Refactored ../python3/nltk_contrib/nltk_contrib/toolbox/lexicon.py
RefactoringTool: Files that were modified:
RefactoringTool: ../python3/nltk_contrib/nltk_contrib/toolbox/lexicon.py
--- ../python3/nltk_contrib/nltk_contrib/toolbox/lexicon.py (original)
+++ ../python3/nltk_contrib/nltk_contrib/toolbox/lexicon.py (refactored)
@@ -13,7 +13,7 @@
 import os, re, sys
 import nltk.data
 from nltk.corpus.reader.toolbox import StandardFormat
-from utilities import Field, SequentialDictionary
+from .utilities import Field, SequentialDictionary
 class Lexicon(StandardFormat):
@@ -67,7 +67,7 @@
         @return: all of the entries in the Lexicon
         @rtype: list of Entry objects
         """
-        keys = self._entries.keys()
+        keys = list(self._entries.keys())
         keys.sort()
         for k in keys :
             v = self._entries[k]
@@ -95,10 +95,10 @@
             # Should this throw an error if a field with no values
             # is used in the list of key fields?
             pass
-        if self._entries.has_key(key) :
+        if key in self._entries :
             if unique :
                 msg = "Non-unique entry! \nEntry: \n%s\nKey Fields: %s\nKey: '%s'\n" % (entry, self._key_fields, key)
-                raise ValueError, msg
+                raise ValueError(msg)
         else :
             self._entries[key] = []
         # Now append entry to list of entries for key
@@ -195,7 +195,7 @@
         """
         s = ""
         fields = self.get_fields()
-        for fm, fvs in self._fields.items():
+        for fm, fvs in list(self._fields.items()):
             for fv in fvs:
                 s = s + "\n\\%s %s" % (fm, fv)
         return s
@@ -264,7 +264,7 @@
         @rtype: list of Field objects
         """
-        return self._fields.values()
+        return list(self._fields.values())
     def get_field_markers(self):
         """
@@ -273,7 +273,7 @@
         @return: the field markers of an entry
         @rtype: list
         """
-        return self._fields.keys()
+        return list(self._fields.keys())
     def get_values_by_marker(self, field_marker, sep=None) :
         return self.get_field_values_by_field_marker(field_marker, sep)
@@ -364,7 +364,7 @@
         @param value : field value
         @type value : string
         """
-        if self._fields.has_key(marker):
+        if marker in self._fields:
             fvs = self._fields[marker]
             fvs.append(value)
         else:
@@ -381,7 +381,7 @@
         @param fieldMarker: field marker to be deleted
         @type fieldMarker: string
         """
-        if self._fields.has_key(fieldMarker):
+        if fieldMarker in self._fields:
             del self._fields[fieldMarker]
 def demo() :
@@ -390,9 +390,9 @@
     l.parse(key_fields=['lx','ps','sn'], unique_entry=False)
     h = l.get_header()
     for e in l.get_entries() :
-        print "<%s><%s><%s>" % (e.get_field_as_string("lx", ""),
+        print("<%s><%s><%s>" % (e.get_field_as_string("lx", ""),
                                 e.get_field_as_string("ps", ""),
-                                e.get_field_as_string("sn", ""))
+                                e.get_field_as_string("sn", "")))
 if __name__ == '__main__':
     demo()
+ for i in '$(find ../python3 -type f -name '\''*.py'\'' |grep -v '\''Tables\.py'\'')'
+ 2to3 -w -n ../python3/nltk_contrib/nltk_contrib/toolbox/language.py
RefactoringTool: Skipping optional fixer: buffer
RefactoringTool: Skipping optional fixer: idioms
RefactoringTool: Skipping optional fixer: set_literal
RefactoringTool: Skipping optional fixer: ws_comma
RefactoringTool: Refactored ../python3/nltk_contrib/nltk_contrib/toolbox/language.py
RefactoringTool: Files that were modified:
RefactoringTool: ../python3/nltk_contrib/nltk_contrib/toolbox/language.py
--- ../python3/nltk_contrib/nltk_contrib/toolbox/language.py (original)
+++ ../python3/nltk_contrib/nltk_contrib/toolbox/language.py (refactored)
@@ -44,7 +44,7 @@
     for c in case_pairs.splitlines():
         val = c.split()
         if len(val) != 2:
-            raise ValueError, '"%s" is not a valid case association' % c
+            raise ValueError('"%s" is not a valid case association' % c)
         u, l = val
         let_u = case[u] = Letter()
         let_l = case[l] = Letter()
@@ -111,20 +111,20 @@
         j = 1
         for m in p:
             if m in graphs:
-                raise ValueError, 'primary "%s" already in sort order' % m
+                raise ValueError('primary "%s" already in sort order' % m)
             graphs[m] = g = Graph()
             g.type = 'p'
             g.order = (i, j, unmarked)
             j += 1
         i += 1
-    prims = graphs.keys()
+    prims = list(graphs.keys())
     prims.remove(' ')
     self.letter_pat = self.make_pattern(prims)
     i = 1
     for s in sec_pre:
         if s in graphs:
-            raise ValueError, 'secondary preceding "%s" already in sort order' % s
+            raise ValueError('secondary preceding "%s" already in sort order' % s)
         graphs[s] = g = Graph()
         g.type = 's'
         g.order = i
@@ -134,13 +134,13 @@
         i += 1
     for s in sec_fol:
         if s in graphs:
-            raise ValueError, 'secondary following "%s" already in sort order' % s
+            raise ValueError('secondary following "%s" already in sort order' % s)
         graphs[s] = g = Graph()
         g.type = 's'
         g.order = i
         i += 1
-    self.graph_pat = self.make_pattern(graphs.keys())
+    self.graph_pat = self.make_pattern(list(graphs.keys()))
 ##~ graph_list = graphs.keys()
 ##~
 ##~ # sort the longest first
@@ -167,7 +167,7 @@
         if match is not None:
             return match.group()
         else:
-            raise ValueError, 'no primary found in "%s"' % s
+            raise ValueError('no primary found in "%s"' % s)
     def transform(self, s):
         graphs = self.graphs
+ for i in '$(find ../python3 -type f -name '\''*.py'\'' |grep -v '\''Tables\.py'\'')'
+ 2to3 -w -n ../python3/nltk_contrib/nltk_contrib/toolbox/iu_mien_hier.py
RefactoringTool: Skipping optional fixer: buffer
RefactoringTool: Skipping optional fixer: idioms
RefactoringTool: Skipping optional fixer: set_literal
RefactoringTool: Skipping optional fixer: ws_comma
RefactoringTool: No changes to ../python3/nltk_contrib/nltk_contrib/toolbox/iu_mien_hier.py
RefactoringTool: Files that need to be modified:
RefactoringTool: ../python3/nltk_contrib/nltk_contrib/toolbox/iu_mien_hier.py
+ for i in '$(find ../python3 -type f -name '\''*.py'\'' |grep -v '\''Tables\.py'\'')'
+ 2to3 -w -n ../python3/nltk_contrib/nltk_contrib/toolbox/etreelib.py
RefactoringTool: Skipping optional fixer: buffer
RefactoringTool: Skipping optional fixer: idioms
RefactoringTool: Skipping optional fixer: set_literal
RefactoringTool: Skipping optional fixer: ws_comma
RefactoringTool: Refactored ../python3/nltk_contrib/nltk_contrib/toolbox/etreelib.py
RefactoringTool: Files that were modified:
RefactoringTool: ../python3/nltk_contrib/nltk_contrib/toolbox/etreelib.py
--- ../python3/nltk_contrib/nltk_contrib/toolbox/etreelib.py (original)
+++ ../python3/nltk_contrib/nltk_contrib/toolbox/etreelib.py (refactored)
@@ -20,7 +20,7 @@
     for item in children:
         if isinstance(item, dict):
             elem.attrib.update(item)
-        elif isinstance(item, basestring):
+        elif isinstance(item, str):
             if len(elem):
                 elem[-1].tail = (elem[-1].tail or "") + item
             else:
@@ -47,7 +47,7 @@
     @type item: string or ElementTree.Element
     @param item: string or element appended to elem
     """
-    if isinstance(item, basestring):
+    if isinstance(item, str):
         if len(elem):
             elem[-1].tail = (elem[-1].tail or "") + item
         else:
+ for i in '$(find ../python3 -type f -name '\''*.py'\'' |grep -v '\''Tables\.py'\'')'
+ 2to3 -w -n ../python3/nltk_contrib/nltk_contrib/toolbox/errors.py
RefactoringTool: Skipping optional fixer: buffer
RefactoringTool: Skipping optional fixer: idioms
RefactoringTool: Skipping optional fixer: set_literal
RefactoringTool: Skipping optional fixer: ws_comma
RefactoringTool: No changes to ../python3/nltk_contrib/nltk_contrib/toolbox/errors.py
RefactoringTool: Files that need to be modified:
RefactoringTool: ../python3/nltk_contrib/nltk_contrib/toolbox/errors.py
+ for i in '$(find ../python3 -type f -name '\''*.py'\'' |grep -v '\''Tables\.py'\'')'
+ 2to3 -w -n ../python3/nltk_contrib/nltk_contrib/toolbox/demos/demo4.py
RefactoringTool: Skipping optional fixer: buffer
RefactoringTool: Skipping optional fixer: idioms
RefactoringTool: Skipping optional fixer: set_literal
RefactoringTool: Skipping optional fixer: ws_comma
RefactoringTool: No changes to ../python3/nltk_contrib/nltk_contrib/toolbox/demos/demo4.py
RefactoringTool: Files that need to be modified:
RefactoringTool: ../python3/nltk_contrib/nltk_contrib/toolbox/demos/demo4.py
+ for i in '$(find ../python3 -type f -name '\''*.py'\'' |grep -v '\''Tables\.py'\'')'
+ 2to3 -w -n ../python3/nltk_contrib/nltk_contrib/toolbox/demos/demo3.py
RefactoringTool: Skipping optional fixer: buffer
RefactoringTool: Skipping optional fixer: idioms
RefactoringTool: Skipping optional fixer: set_literal
RefactoringTool: Skipping optional fixer: ws_comma
RefactoringTool: Refactored ../python3/nltk_contrib/nltk_contrib/toolbox/demos/demo3.py
RefactoringTool: Files that were modified:
RefactoringTool: ../python3/nltk_contrib/nltk_contrib/toolbox/demos/demo3.py
--- ../python3/nltk_contrib/nltk_contrib/toolbox/demos/demo3.py (original)
+++ ../python3/nltk_contrib/nltk_contrib/toolbox/demos/demo3.py (refactored)
@@ -42,9 +42,9 @@
         num_senses += 1
         for example in sense.findall('example'):
             num_examples += 1
-print 'num. lexemes =', num_lexemes
-print 'num. senses =', num_senses
-print 'num. examples =', num_examples
+print('num. lexemes =', num_lexemes)
+print('num. senses =', num_senses)
+print('num. examples =', num_examples)
 #another approach
-print 'num. examples =', len(lexicon.findall('.//example'))
+print('num. examples =', len(lexicon.findall('.//example')))
+ for i in '$(find ../python3 -type f -name '\''*.py'\'' |grep -v '\''Tables\.py'\'')'
+ 2to3 -w -n ../python3/nltk_contrib/nltk_contrib/toolbox/demos/demo2.py
RefactoringTool: Skipping optional fixer: buffer
RefactoringTool: Skipping optional fixer: idioms
RefactoringTool: Skipping optional fixer: set_literal
RefactoringTool: Skipping optional fixer: ws_comma
RefactoringTool: Refactored ../python3/nltk_contrib/nltk_contrib/toolbox/demos/demo2.py
RefactoringTool: Files that were modified:
RefactoringTool: ../python3/nltk_contrib/nltk_contrib/toolbox/demos/demo2.py
--- ../python3/nltk_contrib/nltk_contrib/toolbox/demos/demo2.py (original)
+++ ../python3/nltk_contrib/nltk_contrib/toolbox/demos/demo2.py (refactored)
@@ -38,6 +38,6 @@
     return (s)
 for field in lexicon[50].getchildren():
-    print "\\%s %s" % (field.tag, field.text)
+    print("\\%s %s" % (field.tag, field.text))
     if field.tag == "lx":
-        print "\\cv %s" % cv(field.text)
+        print("\\cv %s" % cv(field.text))
+ for i in '$(find ../python3 -type f -name '\''*.py'\'' |grep -v '\''Tables\.py'\'')'
+ 2to3 -w -n ../python3/nltk_contrib/nltk_contrib/toolbox/demos/demo1.py
RefactoringTool: Skipping optional fixer: buffer
RefactoringTool: Skipping optional fixer: idioms
RefactoringTool: Skipping optional fixer: set_literal
RefactoringTool: Skipping optional fixer: ws_comma
RefactoringTool: Refactored ../python3/nltk_contrib/nltk_contrib/toolbox/demos/demo1.py
RefactoringTool: Files that were modified:
RefactoringTool: ../python3/nltk_contrib/nltk_contrib/toolbox/demos/demo1.py
--- ../python3/nltk_contrib/nltk_contrib/toolbox/demos/demo1.py (original)
+++ ../python3/nltk_contrib/nltk_contrib/toolbox/demos/demo1.py (refactored)
@@ -21,7 +21,7 @@
 for entry in lexicon.findall('record'):
     num_entries += 1
     sum_size += len(entry)
-print sum_size/num_entries
+print(sum_size/num_entries)
 from nltk.etree.ElementTree import ElementTree
+ for i in '$(find ../python3 -type f -name '\''*.py'\'' |grep -v '\''Tables\.py'\'')'
+ 2to3 -w -n ../python3/nltk_contrib/nltk_contrib/toolbox/demos/analyse_toolbox.py
RefactoringTool: Skipping optional fixer: buffer
RefactoringTool: Skipping optional fixer: idioms
RefactoringTool: Skipping optional fixer: set_literal
RefactoringTool: Skipping optional fixer: ws_comma
RefactoringTool: Refactored ../python3/nltk_contrib/nltk_contrib/toolbox/demos/analyse_toolbox.py
RefactoringTool: Files that were modified:
RefactoringTool: ../python3/nltk_contrib/nltk_contrib/toolbox/demos/analyse_toolbox.py
--- ../python3/nltk_contrib/nltk_contrib/toolbox/demos/analyse_toolbox.py (original)
+++ ../python3/nltk_contrib/nltk_contrib/toolbox/demos/analyse_toolbox.py (refactored)
@@ -66,7 +66,7 @@
 def pattern_count(patt_dict):
     n = 0
-    for value in patt_dict.values():
+    for value in list(patt_dict.values()):
         n += len(value)
     return n
@@ -96,25 +96,25 @@
         out_file.write(ET.tostring(lexicon, encoding='UTF-8'))
         out_file.close()
-    print 'analysing files\n%s\n' % '\n'.join(dict_names)
+    print('analysing files\n%s\n' % '\n'.join(dict_names))
     if xml:
-        print 'XML lexicon output in file "%s"\n' % xml
-    print '====chunk grammar===='
-    print gram
-    print '\n'
+        print('XML lexicon output in file "%s"\n' % xml)
+    print('====chunk grammar====')
+    print(gram)
+    print('\n')
     max_positions = 30
-    for structure, patt_dict in analysis.items():
-        print '\n\n===%s===: total= %d' %(structure, pattern_count(patt_dict))
-        for pattern, positions in sorted(patt_dict.items(), key=lambda t: (-len(t[1]), t[0])):
+    for structure, patt_dict in list(analysis.items()):
+        print('\n\n===%s===: total= %d' %(structure, pattern_count(patt_dict)))
+        for pattern, positions in sorted(list(patt_dict.items()), key=lambda t: (-len(t[1]), t[0])):
             if len(positions) <= max_positions:
                 pos_str = 'Entries: %s' % ', '.join(positions)
             else:
                 pos_str = 'Too many entries to list.'
-            print "\t%5d: %s %s" % (len(positions), ':'.join(pattern), pos_str)
-    print "\n\n"
-    print 'mkr\tcount\tnonblank'
+            print("\t%5d: %s %s" % (len(positions), ':'.join(pattern), pos_str))
+    print("\n\n")
+    print('mkr\tcount\tnonblank')
     for mkr in mkr_counts:
-        print '%s\t%5d\t%5d' % (mkr, mkr_counts.get(mkr, 0), nonblank_mkr_counts.get(mkr, 0))
+        print('%s\t%5d\t%5d' % (mkr, mkr_counts.get(mkr, 0), nonblank_mkr_counts.get(mkr, 0)))
 if __name__ == "__main__":
+ for i in '$(find ../python3 -type f -name '\''*.py'\'' |grep -v '\''Tables\.py'\'')'
+ 2to3 -w -n ../python3/nltk_contrib/nltk_contrib/toolbox/data.py
RefactoringTool: Skipping optional fixer: buffer
RefactoringTool: Skipping optional fixer: idioms
RefactoringTool: Skipping optional fixer: set_literal
RefactoringTool: Skipping optional fixer: ws_comma
RefactoringTool: Refactored ../python3/nltk_contrib/nltk_contrib/toolbox/data.py
--- ../python3/nltk_contrib/nltk_contrib/toolbox/data.py (original)
+++ ../python3/nltk_contrib/nltk_contrib/toolbox/data.py (refactored)
@@ -26,11 +26,11 @@
     first = dict()
     gram = dict()
-    for sym, value in grammar.items():
+    for sym, value in list(grammar.items()):
         first[sym] = value[0]
         gram[sym] = value[0] + value[1]
     parse_table = dict()
-    for state in gram.keys():
+    for state in list(gram.keys()):
         parse_table[state] = dict()
         for to_sym in gram[state]:
             if to_sym in grammar:
@@ -96,7 +96,7 @@
         field_iter = self.fields(**kwargs)
         loop = True
         try:
-            mkr, value = field_iter.next()
+            mkr, value = next(field_iter)
         except StopIteration:
             loop = False
         while loop:
@@ -117,8 +117,7 @@
                         builder.end(state)
                         pstack.pop()
                     else:
-                        raise ValueError, \
-                            'Line %d: syntax error, unexpected marker %s.' % (self.line_num, mkr)
+                        raise ValueError('Line %d: syntax error, unexpected marker %s.' % (self.line_num, mkr))
             else:
                 # start of terminal marker
                 add = True
@@ -128,7 +127,7 @@
                     builder.data(value)
                     builder.end(mkr)
                     try:
-                        mkr, value = field_iter.next()
+                        mkr, value = next(field_iter)
                     except StopIteration:
                         loop = False
                 else:
@@ -141,8 +140,7 @@
                     builder.end(state)
                     pstack.pop()
                 else:
-                    raise ValueError, \
-                        'Line %d: syntax error, unexpected marker %s.' % (self.line_num, mkr)
+                    raise ValueError('Line %d: syntax error, unexpected marker %s.' % (self.line_num, mkr))
         for state, first_elems in reversed(pstack):
             builder.end(state)
         return builder.close()
@@ -161,7 +159,7 @@
     @rtype: C{ElementTree._ElementInterface}
    @return: Contents of toolbox data parsed according to rules in grammar
     return parses of all the dictionary files"""
-    if isinstance(file_names, types.StringTypes):
+    if isinstance(file_names, (str,)):
         file_names = (file_names, )
     db = toolbox.ToolboxData()
     all_data = data_header = None
@@ -170,7 +168,7 @@
         logging.info('about to parse %s' % fname)
         try:
             cur_data = db.parse(grammar, **kwargs)
-        except ValueError, msg:
+        except ValueError as msg:
             logging.error('%s: %s' % (fname, msg))
             db.close()
             continue
@@ -178,7 +176,7 @@
         if all_data is not None:
             header = cur_data.find('header')
             if header != data_header:
-                raise ValueError, "cannot combine databases with different types"
+                raise ValueError("cannot combine databases with different types")
             for elem in cur_data.findall('record'):
                 all_data.append(elem)
         else:
@@ -283,12 +281,12 @@
     _months = _init_months()
     fields = s.split('/')
     if len(fields) != 3:
-        raise ValueError, 'Invalid Toolbox date "%s"' % s
+        raise ValueError('Invalid Toolbox date "%s"' % s)
     day = int(fields[0])
     try:
         month = _months[fields[1]]
     except KeyError:
-        raise ValueError, 'Invalid Toolbox date "%s"' % s
+        raise ValueError('Invalid Toolbox date "%s"' % s)
     year = int(fields[2])
     return date(year, month, day)
@@ -334,7 +332,7 @@
     'hm': {'class': 'hm'}, # homonym
     }
-char_codes = '|'.join(sorted(char_code_attribs.keys(), key=lambda s: (-len(s), s)))
+char_codes = '|'.join(sorted(list(char_code_attribs.keys()), key=lambda s: (-len(s), s)))
 word_pat = re.compile(r'(%s):([^\s:;,.?!(){}\[\]]+)' % char_codes)
 bar_pat = re.compile(r'\|(%s)\{([^}]*)(?:\}|$)' % char_codes)
RefactoringTool: Files that were modified:
RefactoringTool: ../python3/nltk_contrib/nltk_contrib/toolbox/data.py
+ for i in '$(find ../python3 -type f -name '\''*.py'\'' |grep -v '\''Tables\.py'\'')'
+ 2to3 -w -n ../python3/nltk_contrib/nltk_contrib/toolbox/__init__.py
RefactoringTool: Skipping optional fixer: buffer
RefactoringTool: Skipping optional fixer: idioms
RefactoringTool: Skipping optional fixer: set_literal
RefactoringTool: Skipping optional fixer: ws_comma
RefactoringTool: Refactored ../python3/nltk_contrib/nltk_contrib/toolbox/__init__.py
RefactoringTool: Files that were modified:
RefactoringTool: ../python3/nltk_contrib/nltk_contrib/toolbox/__init__.py
--- ../python3/nltk_contrib/nltk_contrib/toolbox/__init__.py (original)
+++ ../python3/nltk_contrib/nltk_contrib/toolbox/__init__.py (refactored)
@@ -1,3 +1,3 @@
 # __all__ = ["data", "etreelib", "errors", "lexicon", "settings", "text", "utilities"]
-from data import *
+from .data import *
+ for i in '$(find ../python3 -type f -name '\''*.py'\'' |grep -v '\''Tables\.py'\'')'
+ 2to3 -w -n ../python3/nltk_contrib/nltk_contrib/timex.py
RefactoringTool: Skipping optional fixer: buffer
RefactoringTool: Skipping optional fixer: idioms
RefactoringTool: Skipping optional fixer: set_literal
RefactoringTool: Skipping optional fixer: ws_comma
RefactoringTool: Refactored ../python3/nltk_contrib/nltk_contrib/timex.py
RefactoringTool: Files that were modified:
RefactoringTool: ../python3/nltk_contrib/nltk_contrib/timex.py
--- ../python3/nltk_contrib/nltk_contrib/timex.py (original)
+++ ../python3/nltk_contrib/nltk_contrib/timex.py (refactored)
@@ -11,9 +11,9 @@
 try:
     from mx.DateTime import *
 except ImportError:
-    print """
+    print("""
 Requires eGenix.com mx Base Distribution
-http://www.egenix.com/products/python/mxBase/"""
+http://www.egenix.com/products/python/mxBase/""")
 # Predefined strings.
 numbers = "(^a(?=\s)|one|two|three|four|five|six|seven|eight|nine|ten| \
@@ -173,8 +173,7 @@
     # Find all identified timex and put them into a list
     timex_regex = re.compile(r'.*?', re.DOTALL)
     timex_found = timex_regex.findall(tagged_text)
-    timex_found = map(lambda timex:re.sub(r'', '', timex), \
-                      timex_found)
+    timex_found = [re.sub(r'', '', timex) for timex in timex_found]
     # Calculate the new date accordingly
     for timex in timex_found:
@@ -189,9 +188,9 @@
                                  timex, re.IGNORECASE)
         value = split_timex[0]
         unit = split_timex[1]
-        num_list = map(lambda s:hashnum(s),re.findall(numbers + '+', \
-                       value, re.IGNORECASE))
-        timex = `sum(num_list)` + ' ' + unit
+        num_list = [hashnum(s) for s in re.findall(numbers + '+', \
+                    value, re.IGNORECASE)]
+        timex = repr(sum(num_list)) + ' ' + unit
     # If timex matches ISO format, remove 'time' and reorder 'date'
     if re.match(r'\d+[/-]\d+[/-]\d+ \d+:\d+:\d+\.\d+', timex):
@@ -351,7 +350,7 @@
 def demo():
     import nltk
     text = nltk.corpus.abc.raw('rural.txt')[:10000]
-    print tag(text)
+    print(tag(text))
 if __name__ == '__main__':
     demo()
+ for i in '$(find ../python3 -type f -name '\''*.py'\'' |grep -v '\''Tables\.py'\'')'
+ 2to3 -w -n ../python3/nltk_contrib/nltk_contrib/tiger/utils/parallel.py
RefactoringTool: Skipping optional fixer: buffer
RefactoringTool: Skipping optional fixer: idioms
RefactoringTool: Skipping optional fixer: set_literal
RefactoringTool: Skipping optional fixer: ws_comma
RefactoringTool: No changes to ../python3/nltk_contrib/nltk_contrib/tiger/utils/parallel.py
RefactoringTool: Files that need to be modified:
RefactoringTool: ../python3/nltk_contrib/nltk_contrib/tiger/utils/parallel.py
+ for i in '$(find ../python3 -type f -name '\''*.py'\'' |grep -v '\''Tables\.py'\'')'
+ 2to3 -w -n ../python3/nltk_contrib/nltk_contrib/tiger/utils/factory.py
RefactoringTool: Skipping optional fixer: buffer
RefactoringTool: Skipping optional fixer: idioms
RefactoringTool: Skipping optional fixer: set_literal
RefactoringTool: Skipping optional fixer: ws_comma
RefactoringTool: Refactored ../python3/nltk_contrib/nltk_contrib/tiger/utils/factory.py
RefactoringTool: Files that were modified:
RefactoringTool: ../python3/nltk_contrib/nltk_contrib/tiger/utils/factory.py
--- ../python3/nltk_contrib/nltk_contrib/tiger/utils/factory.py (original)
+++ ../python3/nltk_contrib/nltk_contrib/tiger/utils/factory.py (refactored)
@@ -21,12 +21,12 @@
         switch_value = self._get_switch(*args)
         try:
             cls = self._switch[switch_value]
-        except KeyError, e:
+        except KeyError as e:
             self.raise_error(e.args[0])
         return self._create_instance(cls, *args)
     def raise_error(self, switch_name):
-        raise MissingClassException, switch_name
+        raise MissingClassException(switch_name)
     def _create_instance(self, cls, *args):
         return cls(*args)
+ for i in '$(find ../python3 -type f -name '\''*.py'\'' |grep -v '\''Tables\.py'\'')'
+ 2to3 -w -n ../python3/nltk_contrib/nltk_contrib/tiger/utils/etree_xml.py
RefactoringTool: Skipping optional fixer: buffer
RefactoringTool: Skipping optional fixer: idioms
RefactoringTool: Skipping optional fixer: set_literal
RefactoringTool: Skipping optional fixer: ws_comma
RefactoringTool: Refactored ../python3/nltk_contrib/nltk_contrib/tiger/utils/etree_xml.py
RefactoringTool: Files that were modified:
RefactoringTool: ../python3/nltk_contrib/nltk_contrib/tiger/utils/etree_xml.py
--- ../python3/nltk_contrib/nltk_contrib/tiger/utils/etree_xml.py (original)
+++ ../python3/nltk_contrib/nltk_contrib/tiger/utils/etree_xml.py (refactored)
@@ -3,6 +3,7 @@
 # Author: Torsten Marek
 # Licensed under the GNU GPLv2
 import logging
+import collections
 __all__ = ("element_handler", "IterParseHandler", "ET")
@@ -37,17 +38,15 @@
 class IterParseType(type):
     def __new__(mcs, classname, bases, class_dict):
         class_dict["__x_handlers__"] = handlers = {}
-        for attr in class_dict.itervalues():
-            if callable(attr) and hasattr(attr, HANDLER_ATTRIBUTE_NAME):
+        for attr in class_dict.values():
+            if isinstance(attr, collections.Callable) and hasattr(attr, HANDLER_ATTRIBUTE_NAME):
                 handlers[getattr(attr, HANDLER_ATTRIBUTE_NAME)] = attr
         return type.__new__(mcs, classname, bases, class_dict)
-class IterParseHandler(object):
+class IterParseHandler(object, metaclass=IterParseType):
     DELETE_BRANCH = True
-
-    __metaclass__ = IterParseType
     __x_handlers__ = {}
@@ -75,7 +74,7 @@
         context = iter(event_source)
-        event, root = context.next()
+        event, root = next(context)
         self._handle_root(root)
         for event, elem in context:
+ for i in '$(find ../python3 -type f -name '\''*.py'\'' |grep -v '\''Tables\.py'\'')'
+ 2to3 -w -n ../python3/nltk_contrib/nltk_contrib/tiger/utils/enum.py
RefactoringTool: Skipping optional fixer: buffer
RefactoringTool: Skipping optional fixer: idioms
RefactoringTool: Skipping optional fixer: set_literal
RefactoringTool: Skipping optional fixer: ws_comma
RefactoringTool: Refactored ../python3/nltk_contrib/nltk_contrib/tiger/utils/enum.py
RefactoringTool: Files that were modified:
RefactoringTool: ../python3/nltk_contrib/nltk_contrib/tiger/utils/enum.py
--- ../python3/nltk_contrib/nltk_contrib/tiger/utils/enum.py (original)
+++ ../python3/nltk_contrib/nltk_contrib/tiger/utils/enum.py (refactored)
@@ -25,7 +25,7 @@
         dct["__slots__"] = ("__value__", "__name__")
         dct["__members__"] = members = {}
-        for name, obj in dct.iteritems():
+        for name, obj in dct.items():
             if isinstance(obj, _EnumMember):
                 members[name] = obj.fields
                 obj._name = name
@@ -44,10 +44,10 @@
         field_count = len(type.__getattribute__(mcs, "__fields__"))
-        for member_name, field_values in decl.iteritems():
+        for member_name, field_values in decl.items():
             members[member_name] = mcs()
             if len(field_values) != field_count:
-                raise TypeError, (
+                raise TypeError(
                     "Wrong number of fields for enum member '%s'. Expected %i, got %i instead." % (
                     member_name, field_count, len(field_values)))
             else:
@@ -57,19 +57,19 @@
         type.__setattr__(mcs, "__members__", members)
     def __setattr__(mcs, name, value):
-        raise TypeError, "enum types cannot be modified"
+        raise TypeError("enum types cannot be modified")
     def __delattr__(mcs, name):
-        raise TypeError, "enum types cannot be modified"
+        raise TypeError("enum types cannot be modified")
     def names(mcs):
-        return type.__getattribute__(mcs, '__members__').keys()
+        return list(type.__getattribute__(mcs, '__members__').keys())
     def __len__(mcs):
         return len(type.__getattribute__(mcs, '__members__'))
     def __iter__(mcs):
-        return type.__getattribute__(mcs, '__members__').itervalues()
+        return iter(type.__getattribute__(mcs, '__members__').values())
     def __contains__(mcs, name):
        return name in type.__getattribute__(mcs, '__members__')
@@ -78,10 +78,7 @@
         return "" % (mcs.__module__, mcs.__name__)
-class Enum(object):
-    __metaclass__ = _EnumType
-
-    # assigned by metaclass
+class Enum(object, metaclass=_EnumType):
     __value__ = ()
     __name__ = ""
+ for i in '$(find ../python3 -type f -name '\''*.py'\'' |grep -v '\''Tables\.py'\'')'
+ 2to3 -w -n ../python3/nltk_contrib/nltk_contrib/tiger/utils/db.py
RefactoringTool: Skipping optional fixer: buffer
RefactoringTool: Skipping optional fixer: idioms
RefactoringTool: Skipping optional fixer: set_literal
RefactoringTool: Skipping optional fixer: ws_comma
RefactoringTool: No changes to ../python3/nltk_contrib/nltk_contrib/tiger/utils/db.py
RefactoringTool: Files that need to be modified:
RefactoringTool: ../python3/nltk_contrib/nltk_contrib/tiger/utils/db.py
+ for i in '$(find ../python3 -type f -name '\''*.py'\'' |grep -v '\''Tables\.py'\'')'
+ 2to3 -w -n ../python3/nltk_contrib/nltk_contrib/tiger/utils/__init__.py
RefactoringTool: Skipping optional fixer: buffer
RefactoringTool: Skipping optional fixer: idioms
RefactoringTool: Skipping optional fixer: set_literal
RefactoringTool: Skipping optional fixer: ws_comma
RefactoringTool: No files need to be modified.
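All of the 2to3 invocations in this log are driven by the %prep loop traced near the top: find every *.py file under ../python3 except Tables.py and convert it in place. A minimal Python equivalent of that loop (a sketch reconstructed from the shell trace, not the spec's literal code; the ../python3 path is the build-tree copy created by the 'cp -fR . ../python3' step):

# Sketch: convert every .py file under ../python3 in place, skipping
# Tables.py, whose u'' literals are handled by the sed command instead.
import subprocess
from pathlib import Path

for path in Path("../python3").rglob("*.py"):
    if path.name == "Tables.py":
        continue
    # 2to3 -w writes the converted source back; -n suppresses .bak backups
    subprocess.run(["2to3", "-w", "-n", str(path)], check=True)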
+ for i in '$(find ../python3 -type f -name '\''*.py'\'' |grep -v '\''Tables\.py'\'')'
+ 2to3 -w -n ../python3/nltk_contrib/nltk_contrib/tiger/tigerxml.py
RefactoringTool: Skipping optional fixer: buffer
RefactoringTool: Skipping optional fixer: idioms
RefactoringTool: Skipping optional fixer: set_literal
RefactoringTool: Skipping optional fixer: ws_comma
RefactoringTool: Refactored ../python3/nltk_contrib/nltk_contrib/tiger/tigerxml.py
RefactoringTool: Files that were modified:
RefactoringTool: ../python3/nltk_contrib/nltk_contrib/tiger/tigerxml.py
--- ../python3/nltk_contrib/nltk_contrib/tiger/tigerxml.py (original)
+++ ../python3/nltk_contrib/nltk_contrib/tiger/tigerxml.py (refactored)
@@ -119,7 +119,7 @@
     """
     for node_elem in node_elems:
         node = node_cls(node_elem.get("id"))
-        for feature_name, value in node_elem.attrib.iteritems():
+        for feature_name, value in node_elem.attrib.items():
             if feature_name == "id":
                 continue
             node.features[feature_name] = value
+ for i in '$(find ../python3 -type f -name '\''*.py'\'' |grep -v '\''Tables\.py'\'')'
+ 2to3 -w -n ../python3/nltk_contrib/nltk_contrib/tiger/query/tsqlparser.py
RefactoringTool: Skipping optional fixer: buffer
RefactoringTool: Skipping optional fixer: idioms
RefactoringTool: Skipping optional fixer: set_literal
RefactoringTool: Skipping optional fixer: ws_comma
RefactoringTool: Refactored ../python3/nltk_contrib/nltk_contrib/tiger/query/tsqlparser.py
RefactoringTool: Files that were modified:
RefactoringTool: ../python3/nltk_contrib/nltk_contrib/tiger/query/tsqlparser.py
--- ../python3/nltk_contrib/nltk_contrib/tiger/query/tsqlparser.py (original)
+++ ../python3/nltk_contrib/nltk_contrib/tiger/query/tsqlparser.py (refactored)
@@ -71,13 +71,13 @@
     and can have parentheses for grouping.
     """
     ops = [
-        (suppressed_literal(u"!"), 1, pyparsing.opAssoc.RIGHT,
+        (suppressed_literal("!"), 1, pyparsing.opAssoc.RIGHT,
          lambda s, l, t: ast.Negation(t[0][0])),
-        (suppressed_literal(u"&"), 2, pyparsing.opAssoc.LEFT,
+        (suppressed_literal("&"), 2, pyparsing.opAssoc.LEFT,
         lambda s, l, t: ast.Conjunction(t.asList()[0])),
-        (suppressed_literal(u"|"), 2, pyparsing.opAssoc.LEFT,
+        (suppressed_literal("|"), 2, pyparsing.opAssoc.LEFT,
         lambda s, l, t: ast.Disjunction(t.asList()[0]))]
     return pyparsing.operatorPrecedence(atom, ops)
@@ -145,7 +145,7 @@
     """
     assert all(len(pfx) == 1 for pfx in type_prefixes), "prefix list may only contain characters"
-    v_expr = pyparsing.Combine(pyparsing.oneOf(type_prefixes.keys()) +
+    v_expr = pyparsing.Combine(pyparsing.oneOf(list(type_prefixes.keys())) +
                                pyparsing.Word(pyparsing.alphanums + "_")).setResultsName("varname")
     v_expr.type_map = type_prefixes
     return v_expr
@@ -175,7 +175,7 @@
     :Named Results:
      - `expr`: the right-hand side of the definition
     """
-    definition = (variable_expr + suppressed_literal(u":") +
+    definition = (variable_expr + suppressed_literal(":") +
                   right_hand.setResultsName("expr"))
     return definition.setParseAction(
         lambda s, l, t: ast.VariableDefinition(
@@ -225,7 +225,7 @@
     :AST Node: `FeatureConstraint`
     :Example: ``cat="NP"``, ``pos!=/N+/``, ``word="safe" & pos="NN"``
     """
-    op = pyparsing.oneOf(u"= !=")
+    op = pyparsing.oneOf("= !=")
     v = FEATURE_VALUE
     constraint = (WORD + op + v)
@@ -245,7 +245,7 @@
     :AST Node: `NodeDescription`
     :Example: ``[pos="PREP" & word=("vor"|"vorm")]``, ``[T]``, ``[#a:(word = "safe")]``, ``[#b]``
     """
-    node_desc = surround(u"[", FEATURE_CONSTRAINT, u"]")
+    node_desc = surround("[", FEATURE_CONSTRAINT, "]")
     return node_desc.setParseAction(single_value_holder(ast.NodeDescription))
 NODE_DESCRIPTION = node_description()
@@ -423,7 +423,7 @@
     arg = (NODE_OPERAND | integer_literal()).setResultsName("args", listAllMatches = True)
     identifier = WORD("pred")
-    return (identifier + surround(u"(", pyparsing.delimitedList(arg), u")")
+    return (identifier + surround("(", pyparsing.delimitedList(arg), ")")
             ).setParseAction(lambda s, l, t: ast.Predicate(t.pred, t.args.asList()))
@@ -444,7 +444,7 @@
     """
     atom = (node_predicate() | node_relation_constraint() | NODE_OPERAND)
-    expr = pyparsing.Group(atom + pyparsing.OneOrMore(suppressed_literal(u"&") + atom)
+    expr = pyparsing.Group(atom + pyparsing.OneOrMore(suppressed_literal("&") + atom)
                            ).setParseAction(lambda s, l, t: ast.Conjunction(t.asList()[0])) | atom
     expr.setParseAction(single_value_holder(ast.TsqlExpression))
@@ -463,5 +463,5 @@
     """
     try:
         return self._g.parseString(query_string)[0]
-    except pyparsing.ParseException, e:
-        raise TigerSyntaxError, e
+    except pyparsing.ParseException as e:
+        raise TigerSyntaxError(e)
+ for i in '$(find ../python3 -type f -name '\''*.py'\'' |grep -v '\''Tables\.py'\'')'
+ 2to3 -w -n ../python3/nltk_contrib/nltk_contrib/tiger/query/result.py
RefactoringTool: Skipping optional fixer: buffer
RefactoringTool: Skipping optional fixer: idioms
RefactoringTool: Skipping optional fixer: set_literal
RefactoringTool: Skipping optional fixer: ws_comma
RefactoringTool: Refactored ../python3/nltk_contrib/nltk_contrib/tiger/query/result.py
--- ../python3/nltk_contrib/nltk_contrib/tiger/query/result.py (original)
+++ ../python3/nltk_contrib/nltk_contrib/tiger/query/result.py (refactored)
@@ -13,7 +13,7 @@
 import operator
 import multiprocessing
 from functools import partial
-from itertools import count, izip
+from itertools import count
 from collections import defaultdict
 from nltk_contrib.tiger.index import IndexNodeId
@@ -43,27 +43,27 @@
 def partition_variables(variables, constraints):
-    var_connections = dict(izip(variables, count()))
+    var_connections = dict(zip(variables, count()))
     for l, r in constraints:
         new_id = var_connections[l]
         old_id = var_connections[r]
-        for name, value in var_connections.iteritems():
+        for name, value in var_connections.items():
             if value == old_id:
                 var_connections[name] = new_id
     sets = defaultdict(set)
-    for name, value in var_connections.iteritems():
+    for name, value in var_connections.items():
         sets[value].add(name)
-    return sets.values()
+    return list(sets.values())
 class ConstraintChecker(object):
     @classmethod
     def _nodevar_idx_combinations(cls, ordered_node_vars):
         return [(upper_key, lower_key)
-                for lower_key in xrange(1, len(ordered_node_vars))
-                for upper_key in xrange(lower_key)]
+                for lower_key in range(1, len(ordered_node_vars))
+                for upper_key in range(lower_key)]
     @classmethod
     def _get_node_variables(cls, constraints):
@@ -160,7 +160,7 @@
         of the module.
         """
         if self.has_results:
-            g = [item for item in self.nodes.items() if not item[0].is_set]
+            g = [item for item in list(self.nodes.items()) if not item[0].is_set]
             return [self._nodeids(query_result)
                     for query_result in named_cross_product(g)
@@ -187,7 +187,7 @@
         if query_context.checked_graphs == PREPARE_NEW_AFTER:
             query_context.checker_factory = ConstraintChecker.prepare(query_context.constraints, query_context.node_counts)
         elif query_context.checked_graphs < PREPARE_NEW_AFTER:
-            for node_var, node_ids in graph_results.iteritems():
+            for node_var, node_ids in graph_results.items():
                 query_context.node_counts[node_var] += len(node_ids)
         c = query_context.checker_factory(graph_results, query_context)
@@ -198,7 +198,7 @@
     def __init__(self, nodes, query_context):
         query_context.checked_graphs += 1
         self._nodes = [(node_var.name, [IndexNodeId.from_int(nid) for nid in node_ids])
-                       for node_var, node_ids in nodes.iteritems()
+                       for node_var, node_ids in nodes.items()
                       if not node_var.is_set]
         self._size = product((len(ids) for var, ids in self._nodes), 1)
@@ -230,7 +230,7 @@
             self.checker_factory = ConstraintChecker.prepare(constraints)
             self.constraint_checker = cct_search
         else:
-            raise MissingFeatureError, "Missing feature: disjoint constraint sets. Please file a bug report."
+            raise MissingFeatureError("Missing feature: disjoint constraint sets. Please file a bug report.")
         self._reset_stats()
     def _reset_stats(self):
@@ -264,7 +264,7 @@
 class ResultBuilder(QueryContext, ResultBuilderBase):
     def __init__(self, ev_context, node_descriptions, predicates, constraints):
-        QueryContext.__init__(self, ev_context.db, constraints, node_descriptions.keys())
+        QueryContext.__init__(self, ev_context.db, constraints, list(node_descriptions.keys()))
         ResultBuilderBase.__init__(self, node_descriptions, predicates)
         self._nodesearcher = ev_context.nodesearcher
@@ -277,9 +277,9 @@
         self._reset_stats()
         matching_graphs = self._nodesearcher.search_nodes(self._nodes, self._predicates)
-        return filter(operator.itemgetter(1),
+        return list(filter(operator.itemgetter(1),
                 ((graph_id, self.constraint_checker(nodes, self))
-                 for graph_id, nodes in matching_graphs))
+                 for graph_id, nodes in matching_graphs)))
 class ParallelEvaluatorContext(object):
RefactoringTool: Files that were modified:
RefactoringTool: ../python3/nltk_contrib/nltk_contrib/tiger/query/result.py
+ for i in '$(find ../python3 -type f -name '\''*.py'\'' |grep -v '\''Tables\.py'\'')'
+ 2to3 -w -n ../python3/nltk_contrib/nltk_contrib/tiger/query/querybuilder.py
RefactoringTool: Skipping optional fixer: buffer
RefactoringTool: Skipping optional fixer: idioms
RefactoringTool: Skipping optional fixer: set_literal
RefactoringTool: Skipping optional fixer: ws_comma
RefactoringTool: No changes to ../python3/nltk_contrib/nltk_contrib/tiger/query/querybuilder.py
RefactoringTool: Files that need to be modified:
RefactoringTool: ../python3/nltk_contrib/nltk_contrib/tiger/query/querybuilder.py
+ for i in '$(find ../python3 -type f -name '\''*.py'\'' |grep -v '\''Tables\.py'\'')'
+ 2to3 -w -n ../python3/nltk_contrib/nltk_contrib/tiger/query/predicates.py
RefactoringTool: Skipping optional fixer: buffer
RefactoringTool: Skipping optional fixer: idioms
RefactoringTool: Skipping optional fixer: set_literal
RefactoringTool: Skipping optional fixer: ws_comma
RefactoringTool: Refactored ../python3/nltk_contrib/nltk_contrib/tiger/query/predicates.py
RefactoringTool: Files that were modified:
RefactoringTool: ../python3/nltk_contrib/nltk_contrib/tiger/query/predicates.py
--- ../python3/nltk_contrib/nltk_contrib/tiger/query/predicates.py (original)
+++ ../python3/nltk_contrib/nltk_contrib/tiger/query/predicates.py (refactored)
@@ -72,17 +72,17 @@
             expected_ast_type, mandatory = formal_arg
             try:
                 if invocation_args[idx].TYPE is not expected_ast_type:
-                    raise PredicateTypeError, (
+                    raise PredicateTypeError(
                         "Type Error in argument %i of '%s': Expected %s, got %s" % \
                         (idx, name, invocation_args[idx].TYPE, expected_ast_type))
             except IndexError:
                 if mandatory:
-                    raise PredicateTypeError, "Missing arguments for '%s'." % (name, )
+                    raise PredicateTypeError("Missing arguments for '%s'." % (name, ))
                 else:
                     break
         else:
             if idx + 1 != len(invocation_args):
-                raise PredicateTypeError, "Too many arguments for predicate '%s'." % (name, )
+                raise PredicateTypeError("Too many arguments for predicate '%s'." % (name, ))
         variable = invocation_args[0].variable
         if variable.container not in cls.__ref_types__:
             raise PredicateTypeError("Predicate '%s' not valid for container type '%s'." % (
@@ -273,7 +273,7 @@
     def raise_error(self, pred_name):
         """Raises an `UndefinedNameError` for unknown predicates."""
-        raise UndefinedNameError, (UndefinedNameError.PREDICATE, pred_name)
+        raise UndefinedNameError(UndefinedNameError.PREDICATE, pred_name)
     def _create_instance(self, cls, pred_ast):
         """Creates a new predicate using the class factory method."""
+ for i in '$(find ../python3 -type f -name '\''*.py'\'' |grep -v '\''Tables\.py'\'')'
+ 2to3 -w -n ../python3/nltk_contrib/nltk_contrib/tiger/query/nodesearcher.py
RefactoringTool: Skipping optional fixer: buffer
RefactoringTool: Skipping optional fixer: idioms
RefactoringTool: Skipping optional fixer: set_literal
RefactoringTool: Skipping optional fixer: ws_comma
RefactoringTool: Refactored ../python3/nltk_contrib/nltk_contrib/tiger/query/nodesearcher.py
RefactoringTool: Files that were modified:
RefactoringTool: ../python3/nltk_contrib/nltk_contrib/tiger/query/nodesearcher.py
--- ../python3/nltk_contrib/nltk_contrib/tiger/query/nodesearcher.py (original)
+++ ../python3/nltk_contrib/nltk_contrib/tiger/query/nodesearcher.py (refactored)
@@ -83,10 +83,10 @@
         self._length = length
     def __iter__(self):
-        return ((0, graph_id) for graph_id in xrange(self._length))
-
-
-    get_temp_table = ("_temp_regex_table_%i_%i" % (os.getpid(), c) for c in count()).next
+        return ((0, graph_id) for graph_id in range(self._length))
+
+
+    get_temp_table = ("_temp_regex_table_%i_%i" % (os.getpid(), c) for c in count()).__next__
 MATCH = True
 NO_MATCH = False
@@ -146,8 +146,7 @@
         for literal in literals:
             if literal.TYPE is ast.StringLiteral:
                 if has_explicit_match:
-                    raise ConflictError, \
-                        "Feature '%s' has two conflicting constraints." % (feature_name, )
+                    raise ConflictError("Feature '%s' has two conflicting constraints." % (feature_name, ))
                 has_explicit_match = True
@@ -169,7 +168,7 @@
     def cleanup_temporary_tables(self):
         """Drops all temporary tables created for regex matches."""
-        for table_name in self._temp_tables.itervalues():
+        for table_name in self._temp_tables.values():
             self._db.execute("DROP TABLE %s" % (table_name, ))
         self._temp_tables = {}
@@ -333,7 +332,7 @@
     def _get_set_predicates(self):
         """Returns a list `(var_name, predicate)` containing all constraints on node sets."""
         return [(name, pred)
-                for name, node_predicates in self._predicates.iteritems()
+                for name, node_predicates in self._predicates.items()
                 if name.is_set
                 for pred in node_predicates
                 if not pred.FOR_NODE]
@@ -371,7 +370,7 @@
     def _read_tips(self):
         for node_variable, node_iter, tips in self._node_cursors:
             try:
-                tips[node_variable] = node_iter.next()
+                tips[node_variable] = next(node_iter)
             except StopIteration:
                 if self._remove_iter(node_variable, node_iter):
                     raise EmptyResultException
@@ -381,14 +380,14 @@
         for node_var in self._node_vars:
             current_graph[node_var] = []
-        for target, source in self._shared_variables.iteritems():
+        for target, source in self._shared_variables.items():
             current_graph[target] = current_graph[source]
         return current_graph
     @staticmethod
     def _dump_nodes(from_iter, current_tip, next_graph):
         while current_tip[1] < next_graph:
-            current_tip = from_iter.next()
+            current_tip = next(from_iter)
         return current_tip
     def _find_graphs(self):
@@ -411,7 +410,7 @@
                 while current_tip[1] == min_graphid:
                     node_list.append(current_tip[0])
-                    current_tip = node_iter.next()
+                    current_tip = next(node_iter)
                 tips[varname] = current_tip
                 if not varname.is_set and current_tip[1] > max_graphid:
                     max_graphid = current_tip[1]
+ for i in '$(find ../python3 -type f -name '\''*.py'\'' |grep -v '\''Tables\.py'\'')'
+ 2to3 -w -n ../python3/nltk_contrib/nltk_contrib/tiger/query/node_variable.py
RefactoringTool: Skipping optional fixer: buffer
RefactoringTool: Skipping optional fixer: idioms
RefactoringTool: Skipping optional fixer: set_literal
RefactoringTool: Skipping optional fixer: ws_comma
RefactoringTool: Refactored ../python3/nltk_contrib/nltk_contrib/tiger/query/node_variable.py
RefactoringTool: Files that were modified:
RefactoringTool: ../python3/nltk_contrib/nltk_contrib/tiger/query/node_variable.py
--- ../python3/nltk_contrib/nltk_contrib/tiger/query/node_variable.py (original)
+++ ../python3/nltk_contrib/nltk_contrib/tiger/query/node_variable.py (refactored)
@@ -55,7 +55,7 @@
         elif self.var_type is NodeType.UNKNOWN:
             self.var_type = new_type
         else:
-            raise TigerTypeError, self._name
+            raise TigerTypeError(self._name)
     name = property(attrgetter("_name"), doc = "The name of the variable")
     is_set = property(attrgetter("_is_set"),
+ for i in '$(find ../python3 -type f -name '\''*.py'\'' |grep -v '\''Tables\.py'\'')'
+ 2to3 -w -n ../python3/nltk_contrib/nltk_contrib/tiger/query/factory.py
RefactoringTool: Skipping optional fixer: buffer
RefactoringTool: Skipping optional fixer: idioms
RefactoringTool: Skipping optional fixer: set_literal
RefactoringTool: Skipping optional fixer: ws_comma
RefactoringTool: Refactored ../python3/nltk_contrib/nltk_contrib/tiger/query/factory.py
RefactoringTool: Files that were modified:
RefactoringTool: ../python3/nltk_contrib/nltk_contrib/tiger/query/factory.py
--- ../python3/nltk_contrib/nltk_contrib/tiger/query/factory.py (original)
+++ ../python3/nltk_contrib/nltk_contrib/tiger/query/factory.py (refactored)
@@ -51,7 +51,7 @@
         try:
             self._types.add(self._type_assoc[node.feature])
         except KeyError:
-            raise
UndefinedNameError, (UndefinedNameError.FEATURE, node.feature) + raise UndefinedNameError(UndefinedNameError.FEATURE, node.feature) return self.STOP @ast_visitor.node_handler(ast.FeatureRecord) @@ -72,7 +72,7 @@ for disj in self._disjoints: if len(disj) == 2: - raise TigerTypeError, node_variable.name + raise TigerTypeError(node_variable.name) else: node_var_type.update(disj) @@ -94,7 +94,7 @@ Anonymous node descriptions will be wrapped into a variable definition with an automatically generated, globally unique variable name. """ - get_anon_nodevar = (":anon:%i" % (c, ) for c in count()).next + get_anon_nodevar = (":anon:%i" % (c, ) for c in count()).__next__ constraint_factory = ConstraintFactory() predicate_factory = PredicateFactory() @@ -220,7 +220,7 @@ This mechanism is different from handling of feature records. The type predicate is added to each disjunct, while the feature record can differ between each disjunct. """ - for node_variable, description in self.node_defs.iteritems(): + for node_variable, description in self.node_defs.items(): if description.expression.TYPE is ast.Nop and len(predicates[node_variable]) == 0 \ and node_variable.var_type is not NodeType.UNKNOWN: predicates[node_variable].append(NodeTypePredicate(node_variable.var_type)) @@ -239,7 +239,7 @@ """Processes the collected items and returns the query object.""" predicates = defaultdict(list) - for node_variable, node_desc in self.node_defs.iteritems(): + for node_variable, node_desc in self.node_defs.items(): self.nodedesc_normalizer.run(node_desc) node_var_type, has_frec = self._ntyper.run(node_desc, node_variable) node_variable.refine_type(node_var_type) + for i in '$(find ../python3 -type f -name '\''*.py'\'' |grep -v '\''Tables\.py'\'')' + 2to3 -w -n ../python3/nltk_contrib/nltk_contrib/tiger/query/exceptions.py RefactoringTool: Skipping optional fixer: buffer RefactoringTool: Skipping optional fixer: idioms RefactoringTool: Skipping optional fixer: set_literal RefactoringTool: Skipping optional fixer: ws_comma RefactoringTool: No changes to ../python3/nltk_contrib/nltk_contrib/tiger/query/exceptions.py RefactoringTool: Files that need to be modified: RefactoringTool: ../python3/nltk_contrib/nltk_contrib/tiger/query/exceptions.py + for i in '$(find ../python3 -type f -name '\''*.py'\'' |grep -v '\''Tables\.py'\'')' + 2to3 -w -n ../python3/nltk_contrib/nltk_contrib/tiger/query/evaluator.py RefactoringTool: Skipping optional fixer: buffer RefactoringTool: Skipping optional fixer: idioms RefactoringTool: Skipping optional fixer: set_literal RefactoringTool: Skipping optional fixer: ws_comma RefactoringTool: No changes to ../python3/nltk_contrib/nltk_contrib/tiger/query/evaluator.py RefactoringTool: Files that need to be modified: RefactoringTool: ../python3/nltk_contrib/nltk_contrib/tiger/query/evaluator.py + for i in '$(find ../python3 -type f -name '\''*.py'\'' |grep -v '\''Tables\.py'\'')' + 2to3 -w -n ../python3/nltk_contrib/nltk_contrib/tiger/query/constraints.py RefactoringTool: Skipping optional fixer: buffer RefactoringTool: Skipping optional fixer: idioms RefactoringTool: Skipping optional fixer: set_literal RefactoringTool: Skipping optional fixer: ws_comma RefactoringTool: Refactored ../python3/nltk_contrib/nltk_contrib/tiger/query/constraints.py RefactoringTool: Files that were modified: RefactoringTool: ../python3/nltk_contrib/nltk_contrib/tiger/query/constraints.py --- ../python3/nltk_contrib/nltk_contrib/tiger/query/constraints.py (original) +++ 
../python3/nltk_contrib/nltk_contrib/tiger/query/constraints.py (refactored) @@ -7,7 +7,7 @@ The code and the interfaces of this module are still subject to change. Please refer to the inline comments for more information. """ -from __future__ import with_statement + from nltk_contrib.tiger.query.exceptions import UndefinedNameError from nltk_contrib.tiger.graph import NodeType @@ -171,7 +171,7 @@ def guarded(func, exc_type, new_exc_factory, *args, **kwargs): try: return func(*args, **kwargs) - except exc_type, e: + except exc_type as e: raise new_exc_factory(e) from contextlib import contextmanager @@ -180,8 +180,8 @@ def convert_exception(exc_type, new_exc_type, args = lambda exc: exc.args): try: yield - except exc_type, e: - raise new_exc_type, args(e) + except exc_type as e: + raise new_exc_type(args(e)) def _get_label_id(label, dct, domain): with convert_exception(KeyError, UndefinedNameError, lambda exc: (domain, exc.args[0])): + for i in '$(find ../python3 -type f -name '\''*.py'\'' |grep -v '\''Tables\.py'\'')' + 2to3 -w -n ../python3/nltk_contrib/nltk_contrib/tiger/query/ast_visitor.py RefactoringTool: Skipping optional fixer: buffer RefactoringTool: Skipping optional fixer: idioms RefactoringTool: Skipping optional fixer: set_literal RefactoringTool: Skipping optional fixer: ws_comma RefactoringTool: Refactored ../python3/nltk_contrib/nltk_contrib/tiger/query/ast_visitor.py RefactoringTool: Files that were modified: RefactoringTool: ../python3/nltk_contrib/nltk_contrib/tiger/query/ast_visitor.py --- ../python3/nltk_contrib/nltk_contrib/tiger/query/ast_visitor.py (original) +++ ../python3/nltk_contrib/nltk_contrib/tiger/query/ast_visitor.py (refactored) @@ -38,7 +38,7 @@ def __new__(mcs, classname, bases, class_dict): switch = {} post_switch = {} - for obj in class_dict.itervalues(): + for obj in class_dict.values(): for n in getattr(obj, "node_types", []): switch[n] = obj @@ -50,7 +50,7 @@ return type.__new__(mcs, classname, bases, class_dict) -class AstVisitor(object): +class AstVisitor(object, metaclass=AstVisitorType): """ The base class for AST visitors. @@ -79,7 +79,6 @@ .. [#] Changing the visitor code to allow this is easy, but the use case has not come up yet. 
""" - __metaclass__ = AstVisitorType STOP = (0, None) CONTINUE = lambda s, n: (1, n) + for i in '$(find ../python3 -type f -name '\''*.py'\'' |grep -v '\''Tables\.py'\'')' + 2to3 -w -n ../python3/nltk_contrib/nltk_contrib/tiger/query/ast_utils.py RefactoringTool: Skipping optional fixer: buffer RefactoringTool: Skipping optional fixer: idioms RefactoringTool: Skipping optional fixer: set_literal RefactoringTool: Skipping optional fixer: ws_comma RefactoringTool: Refactored ../python3/nltk_contrib/nltk_contrib/tiger/query/ast_utils.py RefactoringTool: Files that were modified: RefactoringTool: ../python3/nltk_contrib/nltk_contrib/tiger/query/ast_utils.py --- ../python3/nltk_contrib/nltk_contrib/tiger/query/ast_utils.py (original) +++ ../python3/nltk_contrib/nltk_contrib/tiger/query/ast_utils.py (refactored) @@ -85,8 +85,8 @@ # !(pos="ART") === !(T & pos="ART") === !(T) | (pos != "ART") try: orig_type = self._feature_types[child_node.expression.feature] - except KeyError, e: - raise UndefinedNameError, (UndefinedNameError.FEATURE, e.args[0]) + except KeyError as e: + raise UndefinedNameError(UndefinedNameError.FEATURE, e.args[0]) return self.REPLACE( ast.Disjunction([ ast.FeatureRecord(~orig_type), + for i in '$(find ../python3 -type f -name '\''*.py'\'' |grep -v '\''Tables\.py'\'')' + 2to3 -w -n ../python3/nltk_contrib/nltk_contrib/tiger/query/ast.py RefactoringTool: Skipping optional fixer: buffer RefactoringTool: Skipping optional fixer: idioms RefactoringTool: Skipping optional fixer: set_literal RefactoringTool: Skipping optional fixer: ws_comma RefactoringTool: Refactored ../python3/nltk_contrib/nltk_contrib/tiger/query/ast.py RefactoringTool: Files that were modified: RefactoringTool: ../python3/nltk_contrib/nltk_contrib/tiger/query/ast.py --- ../python3/nltk_contrib/nltk_contrib/tiger/query/ast.py (original) +++ ../python3/nltk_contrib/nltk_contrib/tiger/query/ast.py (refactored) @@ -59,7 +59,7 @@ to replace an existing child node via `set_child`. """ from operator import attrgetter -from itertools import izip + from nltk_contrib.tiger.utils.enum import Enum, enum_member # TODO: figure out how to support slots in class hierarchies. Currently, @@ -100,14 +100,14 @@ def __new__(cls, *args, **kwargs): if cls._is_abstract(cls.__name__): - raise TypeError, "cannot instantiate abstract class '%s'." % (cls, ) + raise TypeError("cannot instantiate abstract class '%s'." % (cls, )) else: return object.__new__(cls) def __init__(self, *args): assert len(args) == len(self.__slots__), \ (self.__class__.__name__, args, self.__slots__) - for name, value in izip(self.__slots__, args): + for name, value in zip(self.__slots__, args): setattr(self, name, value) @staticmethod @@ -131,7 +131,7 @@ return False def __repr__(self): - return u"%s(%s)" % (self.__class__.__name__, + return "%s(%s)" % (self.__class__.__name__, ",".join(repr(getattr(self, v)) for v in self.__slots__)) def __iter__(self): @@ -160,7 +160,7 @@ `new_child` must be a subclass of `_Node`. 
""" if self.is_leaf(): - raise TypeError, "cannot set children on leaf nodes" + raise TypeError("cannot set children on leaf nodes") else: assert isinstance(getattr(self, name_tag), _Node) assert isinstance(new_child, _Node) @@ -252,7 +252,7 @@ _Node.__init__(self, left_operand, right_operand, modifiers) def __repr__(self): - return u"%s(%s, %s, **%s)" % (self.__class__.__name__, + return "%s(%s, %s, **%s)" % (self.__class__.__name__, self.left_operand, self.right_operand, self.modifiers) @classmethod + for i in '$(find ../python3 -type f -name '\''*.py'\'' |grep -v '\''Tables\.py'\'')' + 2to3 -w -n ../python3/nltk_contrib/nltk_contrib/tiger/query/__init__.py RefactoringTool: Skipping optional fixer: buffer RefactoringTool: Skipping optional fixer: idioms RefactoringTool: Skipping optional fixer: set_literal RefactoringTool: Skipping optional fixer: ws_comma RefactoringTool: No changes to ../python3/nltk_contrib/nltk_contrib/tiger/query/__init__.py RefactoringTool: Files that need to be modified: RefactoringTool: ../python3/nltk_contrib/nltk_contrib/tiger/query/__init__.py + for i in '$(find ../python3 -type f -name '\''*.py'\'' |grep -v '\''Tables\.py'\'')' + 2to3 -w -n ../python3/nltk_contrib/nltk_contrib/tiger/indexer/tiger_corpus_indexer.py RefactoringTool: Skipping optional fixer: buffer RefactoringTool: Skipping optional fixer: idioms RefactoringTool: Skipping optional fixer: set_literal RefactoringTool: Skipping optional fixer: ws_comma RefactoringTool: Refactored ../python3/nltk_contrib/nltk_contrib/tiger/indexer/tiger_corpus_indexer.py --- ../python3/nltk_contrib/nltk_contrib/tiger/indexer/tiger_corpus_indexer.py (original) +++ ../python3/nltk_contrib/nltk_contrib/tiger/indexer/tiger_corpus_indexer.py (refactored) @@ -94,11 +94,11 @@ def _add_index_metadata(self, **kwargs): self._cursor.executemany("INSERT INTO index_metadata (key, value) VALUES (?, ?)", - kwargs.iteritems()) + iter(kwargs.items())) def set_metadata(self, metadata): self._cursor.executemany("INSERT INTO corpus_metadata (key, value) VALUES (?, ?)", - metadata.iteritems()) + iter(metadata.items())) def add_feature(self, feature_name, domain, feature_values): @@ -113,9 +113,9 @@ self._cursor.executemany("INSERT INTO feature_values (feature_id, value_id, value, description) VALUES (?, ?, ?, ?)", ((feature_id, value_map[value], value, description) - for value, description in feature_values.iteritems())) + for value, description in feature_values.items())) else: - value_map = defaultdict(count().next) + value_map = defaultdict(count().__next__) self._open_list_features.append((feature_id, value_map)) self._feature_value_maps[feature_name] = (value_map, domain) @@ -129,13 +129,13 @@ assert DEFAULT_VROOT_EDGE_LABEL in edge_labels, "no neutral edge label" self._cursor.executemany("INSERT INTO edge_labels (id, label, description) VALUES (?, ?, ?)", - ((idx, e[0], e[1]) for idx, e in enumerate(edge_labels.iteritems()))) + ((idx, e[0], e[1]) for idx, e in enumerate(edge_labels.items()))) self._edge_label_map = dict(self._cursor.execute("SELECT label, id FROM edge_labels")) self._serializer.set_edge_label_map(self._edge_label_map) def set_secedge_labels(self, secedge_labels): self._cursor.executemany("INSERT INTO secedge_labels (id, label, description) VALUES (?, ?, ?)", - ((idx, e[0], e[1]) for idx, e in enumerate(secedge_labels.iteritems()))) + ((idx, e[0], e[1]) for idx, e in enumerate(secedge_labels.items()))) self._secedge_label_map = dict(self._cursor.execute("SELECT label, id FROM secedge_labels")) 
self._serializer.set_secedge_label_map(self._secedge_label_map) @@ -174,7 +174,7 @@ def _index_feature_values(self, graph, node_ids): for node in graph: - for feature_name, feature_value in node.features.iteritems(): + for feature_name, feature_value in node.features.items(): value_map, domain = self._feature_value_maps[feature_name] assert node.TYPE is domain self._insert_lists[feature_name].append((node_ids[node.id].to_int(), value_map[feature_value])) @@ -192,7 +192,7 @@ for label, target_node in node.secedges)) def _flush_node_feature_values(self): - for feature_name, values in self._insert_lists.iteritems(): + for feature_name, values in self._insert_lists.items(): self._cursor.executemany(self._feature_iidx_stmts[feature_name], values) self._insert_lists = defaultdict(list) @@ -203,7 +203,7 @@ graph.id = self._graphs graph.root_id = node_ids[graph.root_id] - for xml_node_id in graph.nodes.keys(): + for xml_node_id in list(graph.nodes.keys()): node = graph.nodes.pop(xml_node_id) node.id = node_ids[node.id] graph.nodes[node_ids[xml_node_id]] = node @@ -215,7 +215,7 @@ def add_graph(self, graph): try: roots = graph.get_roots() - except KeyError, e: + except KeyError as RefactoringTool: Files that were modified: RefactoringTool: ../python3/nltk_contrib/nltk_contrib/tiger/indexer/tiger_corpus_indexer.py e: logging.error("Graph %s is faulty: node %s referenced more than once.", graph.id, e.args[0]) return @@ -239,42 +239,42 @@ (self._graphs, xml_id, buffer(self._serializer.serialize_graph(graph)))) self._graphs += 1 if self._progress and self._graphs % 100 == 0: - print self._graphs + print(self._graphs) def finalize(self, optimize = True): if self._progress: - print "finalize" + print("finalize") self._flush_node_feature_values() if self._progress: - print "inserting feature values" + print("inserting feature values") for feature_id, feature_value_map in self._open_list_features: self._cursor.executemany("INSERT INTO feature_values (feature_id, value_id, value) VALUES (?, ?, ?)", ((feature_id, value_id, value) - for value, value_id in feature_value_map.iteritems())) + for value, value_id in feature_value_map.items())) del self._open_list_features if self._progress: - print "Committing database" + print("Committing database") self._db.commit() self._cursor.execute("CREATE INDEX feature_id_idx ON feature_values (feature_id)") for feature_name in self._feature_value_maps: if self._progress: - print "creating index for feature '%s'" % (feature_name,) + print("creating index for feature '%s'" % (feature_name,)) self._cursor.execute("CREATE INDEX %s_iidx_idx ON feature_iidx_%s (value_id)" % (feature_name, feature_name)) if self._progress: - print "creating index for xml node ids" + print("creating index for xml node ids") self._cursor.execute("CREATE UNIQUE INDEX xml_node_id_idx ON node_data (xml_node_id)") if self._progress: - print "creating index for xml graph ids" + print("creating index for xml graph ids") self._cursor.execute("CREATE UNIQUE INDEX xml_graph_id_idx ON graphs (xml_graph_id)") if self._progress: - print "creating secedge indices" + print("creating secedge indices") self._cursor.execute("CREATE INDEX se_origin_idx ON secedges (origin_id)") self._cursor.execute("CREATE INDEX se_target_idx ON secedges (target_id)") @@ -282,7 +282,7 @@ if optimize: if self._progress: - print "Optimizing database" + print("Optimizing database") self._db.execute("VACUUM") self._add_index_metadata(finished = True) + for i in '$(find ../python3 -type f -name '\''*.py'\'' |grep -v '\''Tables\.py'\'')' 
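[Annotation] The tiger_corpus_indexer.py hunks above bundle three recurring fixer patterns: dict.iteritems()/dict.keys() become the view-returning items()/keys(), wrapped in iter() or list() where the old list behaviour is still needed, and the bound generator method count().next becomes count().__next__. A minimal, self-contained sketch (illustrative names only, not the indexer's real data) of why the list() wrapper matters when the loop mutates the dict, as in the graph-renumbering hunk above:

from collections import defaultdict
from itertools import count

# count().__next__ (spelled count().next in Python 2) hands out consecutive
# integer IDs; this mirrors the value_map idiom in the add_feature() hunk above.
value_map = defaultdict(count().__next__)

nodes = {'n1': 'NP', 'n2': 'VP'}
# In Python 3, nodes.keys() is a live view, and popping entries while
# iterating it raises RuntimeError -- hence 2to3's list(graph.nodes.keys()).
for xml_node_id in list(nodes.keys()):
    node = nodes.pop(xml_node_id)
    nodes[value_map[xml_node_id]] = node

print(sorted(nodes.items()))  # [(0, 'NP'), (1, 'VP')]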
+ 2to3 -w -n ../python3/nltk_contrib/nltk_contrib/tiger/indexer/graph_serializer.py RefactoringTool: Skipping optional fixer: buffer RefactoringTool: Skipping optional fixer: idioms RefactoringTool: Skipping optional fixer: set_literal RefactoringTool: Skipping optional fixer: ws_comma RefactoringTool: Refactored ../python3/nltk_contrib/nltk_contrib/tiger/indexer/graph_serializer.py WARNING: couldn't encode ../python3/nltk_contrib/nltk_contrib/tiger/indexer/graph_serializer.py's diff for your terminal RefactoringTool: Files that were modified: RefactoringTool: ../python3/nltk_contrib/nltk_contrib/tiger/indexer/graph_serializer.py --- ../python3/nltk_contrib/nltk_contrib/tiger/indexer/graph_serializer.py (original) +++ ../python3/nltk_contrib/nltk_contrib/tiger/indexer/graph_serializer.py (refactored) @@ -2,8 +2,8 @@ + for i in '$(find ../python3 -type f -name '\''*.py'\'' |grep -v '\''Tables\.py'\'')' + 2to3 -w -n ../python3/nltk_contrib/nltk_contrib/tiger/indexer/__init__.py RefactoringTool: Skipping optional fixer: buffer RefactoringTool: Skipping optional fixer: idioms RefactoringTool: Skipping optional fixer: set_literal RefactoringTool: Skipping optional fixer: ws_comma RefactoringTool: No files need to be modified. + for i in '$(find ../python3 -type f -name '\''*.py'\'' |grep -v '\''Tables\.py'\'')' + 2to3 -w -n ../python3/nltk_contrib/nltk_contrib/tiger/index.py RefactoringTool: Skipping optional fixer: buffer RefactoringTool: Skipping optional fixer: idioms RefactoringTool: Skipping optional fixer: set_literal RefactoringTool: Skipping optional fixer: ws_comma RefactoringTool: No changes to ../python3/nltk_contrib/nltk_contrib/tiger/index.py RefactoringTool: Files that need to be modified: RefactoringTool: ../python3/nltk_contrib/nltk_contrib/tiger/index.py + for i in '$(find ../python3 -type f -name '\''*.py'\'' |grep -v '\''Tables\.py'\'')' + 2to3 -w -n ../python3/nltk_contrib/nltk_contrib/tiger/graph.py RefactoringTool: Skipping optional fixer: buffer RefactoringTool: Skipping optional fixer: idioms RefactoringTool: Skipping optional fixer: set_literal RefactoringTool: Skipping optional fixer: ws_comma RefactoringTool: Refactored ../python3/nltk_contrib/nltk_contrib/tiger/graph.py RefactoringTool: Files that were modified: RefactoringTool: ../python3/nltk_contrib/nltk_contrib/tiger/graph.py --- ../python3/nltk_contrib/nltk_contrib/tiger/graph.py (original) +++ ../python3/nltk_contrib/nltk_contrib/tiger/graph.py (refactored) @@ -26,7 +26,7 @@ elif type_key == "N": return NodeType.NONTERMINAL else: - raise ValueError, "Unknown domain key '%s'." % (type_key,) + raise ValueError("Unknown domain key '%s'." % (type_key,)) def __invert__(self): if self is self.__class__.TERMINAL: @@ -34,7 +34,7 @@ elif self is self.__class__.NONTERMINAL: return self.__class__.TERMINAL else: - raise ValueError, "Cannot invert '%s'." % (self,) + raise ValueError("Cannot invert '%s'." 
% (self,)) class _TigerNode(object): @@ -121,7 +121,7 @@ self.id = id_ self.edge_label = None self.gorn_address = None - for name, value in kwargs.iteritems(): + for name, value in kwargs.items(): setattr(self, name, value) @@ -154,13 +154,13 @@ return not (self == other) def __iter__(self): - return self.nodes.itervalues() + return iter(self.nodes.values()) def terminals(self): - return (n for n in self.nodes.itervalues() if n.TYPE is NodeType.TERMINAL) + return (n for n in self.nodes.values() if n.TYPE is NodeType.TERMINAL) def nonterminals(self): - return (n for n in self.nodes.itervalues() if n.TYPE is NodeType.NONTERMINAL) + return (n for n in self.nodes.values() if n.TYPE is NodeType.NONTERMINAL) def copy(self): g = TigerGraph(self.id) @@ -189,7 +189,7 @@ self._compute_dominance(terminals, nonterminals) self._compute_corners(nonterminals) - return nonterminals.values(), terminals.values() + return list(nonterminals.values()), list(terminals.values()) def _compute_corners(self, nonterminals): def traverse(node_id, node_data): + for i in '$(find ../python3 -type f -name '\''*.py'\'' |grep -v '\''Tables\.py'\'')' + 2to3 -w -n ../python3/nltk_contrib/nltk_contrib/tiger/demo.py RefactoringTool: Skipping optional fixer: buffer RefactoringTool: Skipping optional fixer: idioms RefactoringTool: Skipping optional fixer: set_literal RefactoringTool: Skipping optional fixer: ws_comma RefactoringTool: Refactored ../python3/nltk_contrib/nltk_contrib/tiger/demo.py WARNING: couldn't encode ../python3/nltk_contrib/nltk_contrib/tiger/demo.py's diff for your terminal RefactoringTool: Files that were modified: RefactoringTool: ../python3/nltk_contrib/nltk_contrib/tiger/demo.py --- ../python3/nltk_contrib/nltk_contrib/tiger/demo.py (original) +++ ../python3/nltk_contrib/nltk_contrib/tiger/demo.py (refactored) @@ -2,7 +2,7 @@ + for i in '$(find ../python3 -type f -name '\''*.py'\'' |grep -v '\''Tables\.py'\'')' + 2to3 -w -n ../python3/nltk_contrib/nltk_contrib/tiger/corpus.py RefactoringTool: Skipping optional fixer: buffer RefactoringTool: Skipping optional fixer: idioms RefactoringTool: Skipping optional fixer: set_literal RefactoringTool: Skipping optional fixer: ws_comma RefactoringTool: Refactored ../python3/nltk_contrib/nltk_contrib/tiger/corpus.py RefactoringTool: Files that were modified: RefactoringTool: ../python3/nltk_contrib/nltk_contrib/tiger/corpus.py --- ../python3/nltk_contrib/nltk_contrib/tiger/corpus.py (original) +++ ../python3/nltk_contrib/nltk_contrib/tiger/corpus.py (refactored) @@ -87,11 +87,11 @@ self._evaluator = None def _get_edge_label_rmap(self): - return [unicode(r[0]) + return [str(r[0]) for r in self._cursor.execute("SELECT label FROM edge_labels ORDER BY id")] def _get_secedge_label_rmap(self): - return [unicode(r[0]) + return [str(r[0]) for r in self._cursor.execute("SELECT label FROM secedge_labels ORDER BY id")] def _get_domain_features(self, domain): @@ -107,7 +107,7 @@ for r in self._cursor.execute( "SELECT value FROM feature_values WHERE feature_id = ? 
ORDER BY value_id", (row[0],)): - values.append(unicode(r[0])) + values.append(str(r[0])) return l def __iter__(self): + for i in '$(find ../python3 -type f -name '\''*.py'\'' |grep -v '\''Tables\.py'\'')' + 2to3 -w -n ../python3/nltk_contrib/nltk_contrib/tiger/__init__.py RefactoringTool: Skipping optional fixer: buffer RefactoringTool: Skipping optional fixer: idioms RefactoringTool: Skipping optional fixer: set_literal RefactoringTool: Skipping optional fixer: ws_comma RefactoringTool: Refactored ../python3/nltk_contrib/nltk_contrib/tiger/__init__.py RefactoringTool: Files that were modified: RefactoringTool: ../python3/nltk_contrib/nltk_contrib/tiger/__init__.py --- ../python3/nltk_contrib/nltk_contrib/tiger/__init__.py (original) +++ ../python3/nltk_contrib/nltk_contrib/tiger/__init__.py (refactored) @@ -34,7 +34,7 @@ class EmptyDbProvider(object): def connect(self): - raise RuntimeError, "cannot reopen memory-only db" + raise RuntimeError("cannot reopen memory-only db") def can_reconnect(self): return False + for i in '$(find ../python3 -type f -name '\''*.py'\'' |grep -v '\''Tables\.py'\'')' + 2to3 -w -n ../python3/nltk_contrib/nltk_contrib/textgrid.py RefactoringTool: Skipping optional fixer: buffer RefactoringTool: Skipping optional fixer: idioms RefactoringTool: Skipping optional fixer: set_literal RefactoringTool: Skipping optional fixer: ws_comma RefactoringTool: Refactored ../python3/nltk_contrib/nltk_contrib/textgrid.py RefactoringTool: Files that were modified: RefactoringTool: ../python3/nltk_contrib/nltk_contrib/textgrid.py --- ../python3/nltk_contrib/nltk_contrib/textgrid.py (original) +++ ../python3/nltk_contrib/nltk_contrib/textgrid.py (refactored) @@ -150,7 +150,7 @@ for tier in self.tiers: yield tier - def next(self): + def __next__(self): if self.idx == (self.size - 1): raise StopIteration self.idx += 1 @@ -447,23 +447,23 @@ return self.__repr__() + "\n " + "\n ".join(" ".join(row) for row in self.simple_transcript) def demo_TextGrid(demo_data): - print "** Demo of the TextGrid class. **" + print("** Demo of the TextGrid class. **") fid = TextGrid(demo_data) - print "Tiers:", fid.size + print("Tiers:", fid.size) for i, tier in enumerate(fid): - print "\n***" - print "Tier:", i + 1 - print tier + print("\n***") + print("Tier:", i + 1) + print(tier) def demo(): # Each demo demonstrates different TextGrid formats. 
- print "Format 1" + print("Format 1") demo_TextGrid(demo_data1) - print "\nFormat 2" + print("\nFormat 2") demo_TextGrid(demo_data2) - print "\nFormat 3" + print("\nFormat 3") demo_TextGrid(demo_data3) + for i in '$(find ../python3 -type f -name '\''*.py'\'' |grep -v '\''Tables\.py'\'')' + 2to3 -w -n ../python3/nltk_contrib/nltk_contrib/tag/tnt.py RefactoringTool: Skipping optional fixer: buffer RefactoringTool: Skipping optional fixer: idioms RefactoringTool: Skipping optional fixer: set_literal RefactoringTool: Skipping optional fixer: ws_comma RefactoringTool: Refactored ../python3/nltk_contrib/nltk_contrib/tag/tnt.py RefactoringTool: Files that were modified: RefactoringTool: ../python3/nltk_contrib/nltk_contrib/tag/tnt.py --- ../python3/nltk_contrib/nltk_contrib/tag/tnt.py (original) +++ ../python3/nltk_contrib/nltk_contrib/tag/tnt.py (refactored) @@ -66,8 +66,8 @@ f = None try: if verbose: - print 'Begin input file creation' - print 'input_file=%s' % input_file + print('Begin input file creation') + print('input_file=%s' % input_file) f = open(input_file, 'w') words = tokenize.WhitespaceTokenizer().tokenize(sentence) @@ -75,21 +75,21 @@ f.write('%s\n' % word) f.write('\n') f.close() - if verbose: print 'End input file creation' + if verbose: print('End input file creation') if verbose: - print 'tnt_bin=%s' % tnt_bin - print 'model_path=%s' % model_path - print 'output_file=%s' % output_file + print('tnt_bin=%s' % tnt_bin) + print('model_path=%s' % model_path) + print('output_file=%s' % output_file) execute_string = execute_string % (tnt_bin, model_path, input_file, output_file) if verbose: - print 'execute_string=%s' % execute_string + print('execute_string=%s' % execute_string) - if verbose: print 'Begin tagging' + if verbose: print('Begin tagging') tnt_exit = os.system(execute_string) - if verbose: print 'End tagging (exit code=%s)' % tnt_exit + if verbose: print('End tagging (exit code=%s)' % tnt_exit) f = open(output_file, 'r') lines = f.readlines() @@ -105,7 +105,7 @@ if verbose: for tag in tagged_words: - print tag + print(tag) finally: if f: f.close() + for i in '$(find ../python3 -type f -name '\''*.py'\'' |grep -v '\''Tables\.py'\'')' + 2to3 -w -n ../python3/nltk_contrib/nltk_contrib/tag/__init__.py RefactoringTool: Skipping optional fixer: buffer RefactoringTool: Skipping optional fixer: idioms RefactoringTool: Skipping optional fixer: set_literal RefactoringTool: Skipping optional fixer: ws_comma RefactoringTool: No files need to be modified. 
+ for i in '$(find ../python3 -type f -name '\''*.py'\'' |grep -v '\''Tables\.py'\'')' + 2to3 -w -n ../python3/nltk_contrib/nltk_contrib/stringcomp.py RefactoringTool: Skipping optional fixer: buffer RefactoringTool: Skipping optional fixer: idioms RefactoringTool: Skipping optional fixer: set_literal RefactoringTool: Skipping optional fixer: ws_comma RefactoringTool: Refactored ../python3/nltk_contrib/nltk_contrib/stringcomp.py RefactoringTool: Files that were modified: RefactoringTool: ../python3/nltk_contrib/nltk_contrib/stringcomp.py --- ../python3/nltk_contrib/nltk_contrib/stringcomp.py (original) +++ ../python3/nltk_contrib/nltk_contrib/stringcomp.py (refactored) @@ -89,20 +89,20 @@ def demo (): - print "Comparison between 'python' and 'python': %.2f" % stringcomp("python", "python") - print "Comparison between 'python' and 'Python': %.2f" % stringcomp("python", "Python") - print "Comparison between 'NLTK' and 'NTLK': %.2f" % stringcomp("NLTK", "NTLK") - print "Comparison between 'abc' and 'def': %.2f" % stringcomp("abc", "def") + print("Comparison between 'python' and 'python': %.2f" % stringcomp("python", "python")) + print("Comparison between 'python' and 'Python': %.2f" % stringcomp("python", "Python")) + print("Comparison between 'NLTK' and 'NTLK': %.2f" % stringcomp("NLTK", "NTLK")) + print("Comparison between 'abc' and 'def': %.2f" % stringcomp("abc", "def")) - print "Word most similar to 'australia' in list ['canada', 'brazil', 'egypt', 'thailand', 'austria']:" + print("Word most similar to 'australia' in list ['canada', 'brazil', 'egypt', 'thailand', 'austria']:") max_score = 0.0 ; best_match = None for country in ["canada", "brazil", "egypt", "thailand", "austria"]: score = stringcomp("australia", country) if score > max_score: best_match = country max_score = score - print "(comparison between 'australia' and '%s': %.2f)" % (country, score) - print "Word most similar to 'australia' is '%s' (score: %.2f)" % (best_match, max_score) + print("(comparison between 'australia' and '%s': %.2f)" % (country, score)) + print("Word most similar to 'australia' is '%s' (score: %.2f)" % (best_match, max_score)) if __name__ == "__main__": demo() + for i in '$(find ../python3 -type f -name '\''*.py'\'' |grep -v '\''Tables\.py'\'')' + 2to3 -w -n ../python3/nltk_contrib/nltk_contrib/seqclass.py RefactoringTool: Skipping optional fixer: buffer RefactoringTool: Skipping optional fixer: idioms RefactoringTool: Skipping optional fixer: set_literal RefactoringTool: Skipping optional fixer: ws_comma RefactoringTool: Refactored ../python3/nltk_contrib/nltk_contrib/seqclass.py RefactoringTool: Files that were modified: RefactoringTool: ../python3/nltk_contrib/nltk_contrib/seqclass.py --- ../python3/nltk_contrib/nltk_contrib/seqclass.py (original) +++ ../python3/nltk_contrib/nltk_contrib/seqclass.py (refactored) @@ -19,7 +19,7 @@ def classify(self, featuresets): if self.size() == 0: - raise ValueError, 'Tagger is not trained' + raise ValueError('Tagger is not trained') for i, featureset in enumerate(featuresets): @@ -91,7 +91,7 @@ stream = open(filename,'w') yaml.dump_all(training_data, stream) - print "Saving features to %s" % os.path.abspath(filename) + print("Saving features to %s" % os.path.abspath(filename)) stream.close() @@ -100,7 +100,7 @@ dict_corpus = tabular2dict(training_corpus, KEYS) contexts = self.contexts(dict_corpus) - print "Detecting features" + print("Detecting features") training_data = [(self.detect_features(c), c[1]['label']) for c in contexts] if save: @@ -118,11 +118,11 @@ 
Train a classifier. """ if self.size() != 0: - raise ValueError, 'Classifier is already trained' + raise ValueError('Classifier is already trained') training_data = self.corpus2training_data(training_corpus) - print "Training classifier" + print("Training classifier") self._model = iis(training_data) + for i in '$(find ../python3 -type f -name '\''*.py'\'' |grep -v '\''Tables\.py'\'')' + 2to3 -w -n ../python3/nltk_contrib/nltk_contrib/scripttranscriber/xmlhandler_unittest.py RefactoringTool: Skipping optional fixer: buffer RefactoringTool: Skipping optional fixer: idioms RefactoringTool: Skipping optional fixer: set_literal RefactoringTool: Skipping optional fixer: ws_comma RefactoringTool: No changes to ../python3/nltk_contrib/nltk_contrib/scripttranscriber/xmlhandler_unittest.py RefactoringTool: Files that need to be modified: RefactoringTool: ../python3/nltk_contrib/nltk_contrib/scripttranscriber/xmlhandler_unittest.py + for i in '$(find ../python3 -type f -name '\''*.py'\'' |grep -v '\''Tables\.py'\'')' + 2to3 -w -n ../python3/nltk_contrib/nltk_contrib/scripttranscriber/xmlhandler.py RefactoringTool: Skipping optional fixer: buffer RefactoringTool: Skipping optional fixer: idioms RefactoringTool: Skipping optional fixer: set_literal RefactoringTool: Skipping optional fixer: ws_comma RefactoringTool: No changes to ../python3/nltk_contrib/nltk_contrib/scripttranscriber/xmlhandler.py RefactoringTool: Files that need to be modified: RefactoringTool: ../python3/nltk_contrib/nltk_contrib/scripttranscriber/xmlhandler.py + for i in '$(find ../python3 -type f -name '\''*.py'\'' |grep -v '\''Tables\.py'\'')' + 2to3 -w -n ../python3/nltk_contrib/nltk_contrib/scripttranscriber/unittest.py RefactoringTool: Skipping optional fixer: buffer RefactoringTool: Skipping optional fixer: idioms RefactoringTool: Skipping optional fixer: set_literal RefactoringTool: Skipping optional fixer: ws_comma RefactoringTool: Refactored ../python3/nltk_contrib/nltk_contrib/scripttranscriber/unittest.py RefactoringTool: Files that were modified: RefactoringTool: ../python3/nltk_contrib/nltk_contrib/scripttranscriber/unittest.py --- ../python3/nltk_contrib/nltk_contrib/scripttranscriber/unittest.py (original) +++ ../python3/nltk_contrib/nltk_contrib/scripttranscriber/unittest.py (refactored) @@ -41,4 +41,4 @@ def TestUnitOutputs(unitname, gold_file, test_file): CompareOutputFiles(gold_file, test_file) - print '%s successful' % unitname + print('%s successful' % unitname) + for i in '$(find ../python3 -type f -name '\''*.py'\'' |grep -v '\''Tables\.py'\'')' + 2to3 -w -n ../python3/nltk_contrib/nltk_contrib/scripttranscriber/tokens.py RefactoringTool: Skipping optional fixer: buffer RefactoringTool: Skipping optional fixer: idioms RefactoringTool: Skipping optional fixer: set_literal RefactoringTool: Skipping optional fixer: ws_comma RefactoringTool: Refactored ../python3/nltk_contrib/nltk_contrib/scripttranscriber/tokens.py RefactoringTool: Files that were modified: RefactoringTool: ../python3/nltk_contrib/nltk_contrib/scripttranscriber/tokens.py --- ../python3/nltk_contrib/nltk_contrib/scripttranscriber/tokens.py (original) +++ ../python3/nltk_contrib/nltk_contrib/scripttranscriber/tokens.py (refactored) @@ -200,7 +200,7 @@ except KeyError: return self.InitTokenStats(tok) def TokenStats(self): - return self.tokstats_.values() + return list(self.tokstats_.values()) def SetN(self, n): self.n_ = n @@ -288,7 +288,7 @@ try: map[hash_string].append(token_) except KeyError: map[hash_string] = [token_] ntokens = [] - keys = 
map.keys() + keys = list(map.keys()) keys.sort() for k in keys: token_ = map[k][0] + for i in '$(find ../python3 -type f -name '\''*.py'\'' |grep -v '\''Tables\.py'\'')' + 2to3 -w -n ../python3/nltk_contrib/nltk_contrib/scripttranscriber/token_comp_unittest.py RefactoringTool: Skipping optional fixer: buffer RefactoringTool: Skipping optional fixer: idioms RefactoringTool: Skipping optional fixer: set_literal RefactoringTool: Skipping optional fixer: ws_comma RefactoringTool: Refactored ../python3/nltk_contrib/nltk_contrib/scripttranscriber/token_comp_unittest.py WARNING: couldn't encode ../python3/nltk_contrib/nltk_contrib/scripttranscriber/token_comp_unittest.py's diff for your terminal RefactoringTool: Files that were modified: RefactoringTool: ../python3/nltk_contrib/nltk_contrib/scripttranscriber/token_comp_unittest.py --- ../python3/nltk_contrib/nltk_contrib/scripttranscriber/token_comp_unittest.py (original) +++ ../python3/nltk_contrib/nltk_contrib/scripttranscriber/token_comp_unittest.py (refactored) @@ -32,11 +32,11 @@ import auxiliary_comp from __init__ import BASE_ + for i in '$(find ../python3 -type f -name '\''*.py'\'' |grep -v '\''Tables\.py'\'')' + 2to3 -w -n ../python3/nltk_contrib/nltk_contrib/scripttranscriber/token_comp.py RefactoringTool: Skipping optional fixer: buffer RefactoringTool: Skipping optional fixer: idioms RefactoringTool: Skipping optional fixer: set_literal RefactoringTool: Skipping optional fixer: ws_comma RefactoringTool: No changes to ../python3/nltk_contrib/nltk_contrib/scripttranscriber/token_comp.py RefactoringTool: Files that need to be modified: RefactoringTool: ../python3/nltk_contrib/nltk_contrib/scripttranscriber/token_comp.py + for i in '$(find ../python3 -type f -name '\''*.py'\'' |grep -v '\''Tables\.py'\'')' + 2to3 -w -n ../python3/nltk_contrib/nltk_contrib/scripttranscriber/thai_unittest.py RefactoringTool: Skipping optional fixer: buffer RefactoringTool: Skipping optional fixer: idioms RefactoringTool: Skipping optional fixer: set_literal RefactoringTool: Skipping optional fixer: ws_comma RefactoringTool: Refactored ../python3/nltk_contrib/nltk_contrib/scripttranscriber/thai_unittest.py RefactoringTool: Files that were modified: RefactoringTool: ../python3/nltk_contrib/nltk_contrib/scripttranscriber/thai_unittest.py --- ../python3/nltk_contrib/nltk_contrib/scripttranscriber/thai_unittest.py (original) +++ ../python3/nltk_contrib/nltk_contrib/scripttranscriber/thai_unittest.py (refactored) @@ -85,7 +85,7 @@ comparator.ComputeDistance() result = comparator.ComparisonResult() matches[(hash1, hash2)] = result - values = matches.values() + values = list(matches.values()) values.sort(lambda x, y: cmp(x.Cost(), y.Cost())) p = open(match_file, 'w') ## zero out the file p.close() + for i in '$(find ../python3 -type f -name '\''*.py'\'' |grep -v '\''Tables\.py'\'')' + 2to3 -w -n ../python3/nltk_contrib/nltk_contrib/scripttranscriber/thai_extractor.py RefactoringTool: Skipping optional fixer: buffer RefactoringTool: Skipping optional fixer: idioms RefactoringTool: Skipping optional fixer: set_literal RefactoringTool: Skipping optional fixer: ws_comma RefactoringTool: Refactored ../python3/nltk_contrib/nltk_contrib/scripttranscriber/thai_extractor.py RefactoringTool: Files that were modified: RefactoringTool: ../python3/nltk_contrib/nltk_contrib/scripttranscriber/thai_extractor.py --- ../python3/nltk_contrib/nltk_contrib/scripttranscriber/thai_extractor.py (original) +++ ../python3/nltk_contrib/nltk_contrib/scripttranscriber/thai_extractor.py 
(refactored) @@ -63,7 +63,7 @@ return None def Dump(self, file): - keys = self.table_.keys() + keys = list(self.table_.keys()) keys.sort(lambda x, y: cmp(self.table_[x], self.table_[y])) p = open(file, 'w') for k in keys: @@ -90,7 +90,7 @@ extraction. """ list = [] - for u in unicode(text, 'utf8'): + for u in str(text, 'utf8'): list.append(u.encode('utf8')) return list @@ -179,7 +179,7 @@ } self.snow_session_ = snow.SnowSession(snow.MODE_SERVER, snow_test_args) - try: utext = unicode(line.strip(), 'utf-8') + try: utext = str(line.strip(), 'utf-8') except TypeError: utext = line.strip() segments = utext.split() for segment in segments: @@ -189,9 +189,8 @@ seglist = Listify(segment.encode('utf8')) features = [] for i in range(len(seglist)): - feats = ', '.join(map(lambda x: str(x), - FeatureExtract(i, seglist, - self.feature_map_))) + ':\n' + feats = ', '.join([str(x) for x in FeatureExtract(i, seglist, + self.feature_map_)]) + ':\n' result = self.snow_session_.evaluateExample(feats) target, a, b, activation = result.split('\n')[1].split() target = int(target[:-1]) ## remove ':' + for i in '$(find ../python3 -type f -name '\''*.py'\'' |grep -v '\''Tables\.py'\'')' + 2to3 -w -n ../python3/nltk_contrib/nltk_contrib/scripttranscriber/sample.py RefactoringTool: Skipping optional fixer: buffer RefactoringTool: Skipping optional fixer: idioms RefactoringTool: Skipping optional fixer: set_literal RefactoringTool: Skipping optional fixer: ws_comma RefactoringTool: Refactored ../python3/nltk_contrib/nltk_contrib/scripttranscriber/sample.py RefactoringTool: Files that were modified: RefactoringTool: ../python3/nltk_contrib/nltk_contrib/scripttranscriber/sample.py --- ../python3/nltk_contrib/nltk_contrib/scripttranscriber/sample.py (original) +++ ../python3/nltk_contrib/nltk_contrib/scripttranscriber/sample.py (refactored) @@ -105,7 +105,7 @@ comparator.ComputeDistance() result = comparator.ComparisonResult() matches[(hash1, hash2)] = result - values = matches.values() + values = list(matches.values()) values.sort(lambda x, y: cmp(x.Cost(), y.Cost())) p = open(MATCH_FILE_, 'w') ## zero out the file p.close() @@ -130,7 +130,7 @@ comparator.ComputeDistance() result = comparator.ComparisonResult() correlates[(hash1, hash2)] = result - values = correlates.values() + values = list(correlates.values()) values.sort(lambda x, y: cmp(x.Cost(), y.Cost())) p = open(CORR_FILE_, 'w') ## zero out the file p.close() + for i in '$(find ../python3 -type f -name '\''*.py'\'' |grep -v '\''Tables\.py'\'')' + 2to3 -w -n ../python3/nltk_contrib/nltk_contrib/scripttranscriber/pronouncer_unittest.py RefactoringTool: Skipping optional fixer: buffer RefactoringTool: Skipping optional fixer: idioms RefactoringTool: Skipping optional fixer: set_literal RefactoringTool: Skipping optional fixer: ws_comma RefactoringTool: Refactored ../python3/nltk_contrib/nltk_contrib/scripttranscriber/pronouncer_unittest.py RefactoringTool: Files that were modified: RefactoringTool: ../python3/nltk_contrib/nltk_contrib/scripttranscriber/pronouncer_unittest.py --- ../python3/nltk_contrib/nltk_contrib/scripttranscriber/pronouncer_unittest.py (original) +++ ../python3/nltk_contrib/nltk_contrib/scripttranscriber/pronouncer_unittest.py (refactored) @@ -57,9 +57,9 @@ for line in p: line = line.strip() word, pron = line.split('\t') - try: word = unicode(word, 'utf-8') + try: word = str(word, 'utf-8') except TypeError: pass - try: pron = unicode(pron, 'utf-8') + try: pron = str(pron, 'utf-8') except TypeError: pass try: 
GOLDEN_[word].AddPronunciation(pron) @@ -73,7 +73,7 @@ if output: file = open(GOLDEN_FILE_, 'w') else: LoadGolden() for w in WORDS_: - try: w = unicode(w.strip(), 'utf-8') + try: w = str(w.strip(), 'utf-8') except TypeError: pass token_ = tokens.Token(w) pronouncer_ = pronouncer.UnitranPronouncer(token_) @@ -89,7 +89,7 @@ file.write('%s\t%s\n' % (pronouncer_.Token().String(), p)) else: try: - string = unicode(pronouncer_.Token().String(), 'utf-8') + string = str(pronouncer_.Token().String(), 'utf-8') except TypeError: string = pronouncer_.Token().String() assert string in GOLDEN_, \ @@ -107,10 +107,10 @@ nprons[i], gprons[i]) if output: - print 'generated %s' % GOLDEN_FILE_ + print('generated %s' % GOLDEN_FILE_) file.close() else: - print '%s successful' % sys.argv[0] + print('%s successful' % sys.argv[0]) if __name__ == '__main__': + for i in '$(find ../python3 -type f -name '\''*.py'\'' |grep -v '\''Tables\.py'\'')' + 2to3 -w -n ../python3/nltk_contrib/nltk_contrib/scripttranscriber/pronouncer.py RefactoringTool: Skipping optional fixer: buffer RefactoringTool: Skipping optional fixer: idioms RefactoringTool: Skipping optional fixer: set_literal RefactoringTool: Skipping optional fixer: ws_comma RefactoringTool: No changes to ../python3/nltk_contrib/nltk_contrib/scripttranscriber/pronouncer.py RefactoringTool: Files that need to be modified: RefactoringTool: ../python3/nltk_contrib/nltk_contrib/scripttranscriber/pronouncer.py + for i in '$(find ../python3 -type f -name '\''*.py'\'' |grep -v '\''Tables\.py'\'')' + 2to3 -w -n ../python3/nltk_contrib/nltk_contrib/scripttranscriber/perceptron_trainer_unittest.py RefactoringTool: Skipping optional fixer: buffer RefactoringTool: Skipping optional fixer: idioms RefactoringTool: Skipping optional fixer: set_literal RefactoringTool: Skipping optional fixer: ws_comma RefactoringTool: Refactored ../python3/nltk_contrib/nltk_contrib/scripttranscriber/perceptron_trainer_unittest.py RefactoringTool: Files that were modified: RefactoringTool: ../python3/nltk_contrib/nltk_contrib/scripttranscriber/perceptron_trainer_unittest.py --- ../python3/nltk_contrib/nltk_contrib/scripttranscriber/perceptron_trainer_unittest.py (original) +++ ../python3/nltk_contrib/nltk_contrib/scripttranscriber/perceptron_trainer_unittest.py (refactored) @@ -47,28 +47,28 @@ # train the perceptron pt.Train(dict[0:1000]) first_run = EvaluateExamples(pt) - print first_run + print(first_run) # results here should be the same second_run = EvaluateExamples(pt) - print second_run + print(second_run) # learn from new examples # produce new results pt.Retrain(dict[1001:3000]) third_run = EvaluateExamples(pt) - print third_run + print(third_run) # this result should be the same as the third run fourth_run = EvaluateExamples(pt) - print fourth_run + print(fourth_run) # test if first_run == second_run and first_run != third_run \ and third_run == fourth_run: - print 'unittest successful' + print('unittest successful') else: - print 'unsuccessful' + print('unsuccessful') # clean up pt.CleanUp() + for i in '$(find ../python3 -type f -name '\''*.py'\'' |grep -v '\''Tables\.py'\'')' + 2to3 -w -n ../python3/nltk_contrib/nltk_contrib/scripttranscriber/perceptron_trainer.py RefactoringTool: Skipping optional fixer: buffer RefactoringTool: Skipping optional fixer: idioms RefactoringTool: Skipping optional fixer: set_literal RefactoringTool: Skipping optional fixer: ws_comma RefactoringTool: Refactored ../python3/nltk_contrib/nltk_contrib/scripttranscriber/perceptron_trainer.py RefactoringTool: Files 
that were modified: RefactoringTool: ../python3/nltk_contrib/nltk_contrib/scripttranscriber/perceptron_trainer.py --- ../python3/nltk_contrib/nltk_contrib/scripttranscriber/perceptron_trainer.py (original) +++ ../python3/nltk_contrib/nltk_contrib/scripttranscriber/perceptron_trainer.py (refactored) @@ -92,7 +92,7 @@ """ # if the peceptron is already trained, warn and abort if self.snow_p_.IsTrained(): - if DEBUG_: print 'Perceptron already trained (use Retrain?)' + if DEBUG_: print('Perceptron already trained (use Retrain?)') return False for example in pos_examples_list: @@ -144,7 +144,7 @@ """ # if the perceptron has not been trained, warn and abort if not self.snow_p_.IsTrained(): - if DEBUG_: print 'Perceptron is not trained (use Train?)' + if DEBUG_: print('Perceptron is not trained (use Train?)') return False for example in new_positives: @@ -193,7 +193,7 @@ Return: a tuple of activated target and activation, in the order as mentioned. """ if not self.snow_p_.IsTrained(): - if DEBUG_: print 'Perceptron not trained' + if DEBUG_: print('Perceptron not trained') return False test_ex = Example(s_token, t_token) @@ -257,8 +257,7 @@ candidates = set(candidates) candidates = list(candidates) - distances = map(lambda x: - (x, Distance(x.split(), token2.split())), candidates) + distances = [(x, Distance(x.split(), token2.split())) for x in candidates] distances = sorted(distances, lambda x,y: x[1] - y[1]) for new_str in distances[1:5]: @@ -288,7 +287,7 @@ """ def __init__(self, l): self.l_ = l - self.left_els_ = map(lambda x: x[0], self.l_) + self.left_els_ = [x[0] for x in self.l_] def CreateShuffledList(self): shuffled_list = [] + for i in '$(find ../python3 -type f -name '\''*.py'\'' |grep -v '\''Tables\.py'\'')' + 2to3 -w -n ../python3/nltk_contrib/nltk_contrib/scripttranscriber/perceptron.py RefactoringTool: Skipping optional fixer: buffer RefactoringTool: Skipping optional fixer: idioms RefactoringTool: Skipping optional fixer: set_literal RefactoringTool: Skipping optional fixer: ws_comma RefactoringTool: Refactored ../python3/nltk_contrib/nltk_contrib/scripttranscriber/perceptron.py RefactoringTool: Files that were modified: RefactoringTool: ../python3/nltk_contrib/nltk_contrib/scripttranscriber/perceptron.py --- ../python3/nltk_contrib/nltk_contrib/scripttranscriber/perceptron.py (original) +++ ../python3/nltk_contrib/nltk_contrib/scripttranscriber/perceptron.py (refactored) @@ -170,7 +170,7 @@ """Dump the entire feature map to a file whose name is given as the parameter. 
""" fm_fp = open(feature_map_file, 'w') - for k, v in self.feature_dic_.iteritems(): + for k, v in self.feature_dic_.items(): fm_fp.write(k + '\t' + str(v) + '\n') fm_fp.close() return True + for i in '$(find ../python3 -type f -name '\''*.py'\'' |grep -v '\''Tables\.py'\'')' + 2to3 -w -n ../python3/nltk_contrib/nltk_contrib/scripttranscriber/paper_example.py RefactoringTool: Skipping optional fixer: buffer RefactoringTool: Skipping optional fixer: idioms RefactoringTool: Skipping optional fixer: set_literal RefactoringTool: Skipping optional fixer: ws_comma RefactoringTool: No changes to ../python3/nltk_contrib/nltk_contrib/scripttranscriber/paper_example.py RefactoringTool: Files that need to be modified: RefactoringTool: ../python3/nltk_contrib/nltk_contrib/scripttranscriber/paper_example.py + for i in '$(find ../python3 -type f -name '\''*.py'\'' |grep -v '\''Tables\.py'\'')' + 2to3 -w -n ../python3/nltk_contrib/nltk_contrib/scripttranscriber/morph_unittest.py RefactoringTool: Skipping optional fixer: buffer RefactoringTool: Skipping optional fixer: idioms RefactoringTool: Skipping optional fixer: set_literal RefactoringTool: Skipping optional fixer: ws_comma RefactoringTool: No changes to ../python3/nltk_contrib/nltk_contrib/scripttranscriber/morph_unittest.py RefactoringTool: Files that need to be modified: RefactoringTool: ../python3/nltk_contrib/nltk_contrib/scripttranscriber/morph_unittest.py + for i in '$(find ../python3 -type f -name '\''*.py'\'' |grep -v '\''Tables\.py'\'')' + 2to3 -w -n ../python3/nltk_contrib/nltk_contrib/scripttranscriber/morph.py RefactoringTool: Skipping optional fixer: buffer RefactoringTool: Skipping optional fixer: idioms RefactoringTool: Skipping optional fixer: set_literal RefactoringTool: Skipping optional fixer: ws_comma RefactoringTool: Refactored ../python3/nltk_contrib/nltk_contrib/scripttranscriber/morph.py RefactoringTool: Files that were modified: RefactoringTool: ../python3/nltk_contrib/nltk_contrib/scripttranscriber/morph.py --- ../python3/nltk_contrib/nltk_contrib/scripttranscriber/morph.py (original) +++ ../python3/nltk_contrib/nltk_contrib/scripttranscriber/morph.py (refactored) @@ -86,7 +86,7 @@ def Morphs(self, string): try: return self.morphs_[string] - except AttributeError, KeyError: return '' + except AttributeError as KeyError: return '' def LabelDoclist(self): assert self.initialized_ == True, 'Must Initialize() the analyzer!' 
+ for i in '$(find ../python3 -type f -name '\''*.py'\'' |grep -v '\''Tables\.py'\'')' + 2to3 -w -n ../python3/nltk_contrib/nltk_contrib/scripttranscriber/miner.py RefactoringTool: Skipping optional fixer: buffer RefactoringTool: Skipping optional fixer: idioms RefactoringTool: Skipping optional fixer: set_literal RefactoringTool: Skipping optional fixer: ws_comma RefactoringTool: Refactored ../python3/nltk_contrib/nltk_contrib/scripttranscriber/miner.py RefactoringTool: Files that were modified: RefactoringTool: ../python3/nltk_contrib/nltk_contrib/scripttranscriber/miner.py --- ../python3/nltk_contrib/nltk_contrib/scripttranscriber/miner.py (original) +++ ../python3/nltk_contrib/nltk_contrib/scripttranscriber/miner.py (refactored) @@ -170,7 +170,7 @@ result = comparator_.ComparisonResult() matches[(hash1, hash2)] = result did += 1 - values = matches.values() + values = list(matches.values()) values.sort(lambda x, y: comp(y.Cost(), x.Cost())) if pdump: sys.stderr.write('Dumping comparisons to %s...\n' % pdump) + for i in '$(find ../python3 -type f -name '\''*.py'\'' |grep -v '\''Tables\.py'\'')' + 2to3 -w -n ../python3/nltk_contrib/nltk_contrib/scripttranscriber/makeindex.py RefactoringTool: Skipping optional fixer: buffer RefactoringTool: Skipping optional fixer: idioms RefactoringTool: Skipping optional fixer: set_literal RefactoringTool: Skipping optional fixer: ws_comma RefactoringTool: Refactored ../python3/nltk_contrib/nltk_contrib/scripttranscriber/makeindex.py RefactoringTool: Files that were modified: RefactoringTool: ../python3/nltk_contrib/nltk_contrib/scripttranscriber/makeindex.py --- ../python3/nltk_contrib/nltk_contrib/scripttranscriber/makeindex.py (original) +++ ../python3/nltk_contrib/nltk_contrib/scripttranscriber/makeindex.py (refactored) @@ -13,13 +13,13 @@ import sys -print '' -print '' -print 'Pydoc for ScriptTranscriber' -print '' -print '' +print('') +print('') +print('Pydoc for ScriptTranscriber') +print('') +print('') for line in sys.stdin.readlines(): html = line.strip() - print '%s
' % (html, html) -print '' -print '' + print('%s
' % (html, html)) +print('') +print('') + for i in '$(find ../python3 -type f -name '\''*.py'\'' |grep -v '\''Tables\.py'\'')' + 2to3 -w -n ../python3/nltk_contrib/nltk_contrib/scripttranscriber/japanese_extractor.py RefactoringTool: Skipping optional fixer: buffer RefactoringTool: Skipping optional fixer: idioms RefactoringTool: Skipping optional fixer: set_literal RefactoringTool: Skipping optional fixer: ws_comma RefactoringTool: Refactored ../python3/nltk_contrib/nltk_contrib/scripttranscriber/japanese_extractor.py RefactoringTool: Files that were modified: RefactoringTool: ../python3/nltk_contrib/nltk_contrib/scripttranscriber/japanese_extractor.py --- ../python3/nltk_contrib/nltk_contrib/scripttranscriber/japanese_extractor.py (original) +++ ../python3/nltk_contrib/nltk_contrib/scripttranscriber/japanese_extractor.py (refactored) @@ -31,7 +31,7 @@ """ def LineSegment(self, line): - try: utext = unicode(line.strip(), 'utf-8') + try: utext = str(line.strip(), 'utf-8') except TypeError: utext = line.strip() word = [] for u in utext: + for i in '$(find ../python3 -type f -name '\''*.py'\'' |grep -v '\''Tables\.py'\'')' + 2to3 -w -n ../python3/nltk_contrib/nltk_contrib/scripttranscriber/filter_unittest.py RefactoringTool: Skipping optional fixer: buffer RefactoringTool: Skipping optional fixer: idioms RefactoringTool: Skipping optional fixer: set_literal RefactoringTool: Skipping optional fixer: ws_comma RefactoringTool: No changes to ../python3/nltk_contrib/nltk_contrib/scripttranscriber/filter_unittest.py RefactoringTool: Files that need to be modified: RefactoringTool: ../python3/nltk_contrib/nltk_contrib/scripttranscriber/filter_unittest.py + for i in '$(find ../python3 -type f -name '\''*.py'\'' |grep -v '\''Tables\.py'\'')' + 2to3 -w -n ../python3/nltk_contrib/nltk_contrib/scripttranscriber/filter.py RefactoringTool: Skipping optional fixer: buffer RefactoringTool: Skipping optional fixer: idioms RefactoringTool: Skipping optional fixer: set_literal RefactoringTool: Skipping optional fixer: ws_comma RefactoringTool: No changes to ../python3/nltk_contrib/nltk_contrib/scripttranscriber/filter.py RefactoringTool: Files that need to be modified: RefactoringTool: ../python3/nltk_contrib/nltk_contrib/scripttranscriber/filter.py + for i in '$(find ../python3 -type f -name '\''*.py'\'' |grep -v '\''Tables\.py'\'')' + 2to3 -w -n ../python3/nltk_contrib/nltk_contrib/scripttranscriber/extractor_unittest.py RefactoringTool: Skipping optional fixer: buffer RefactoringTool: Skipping optional fixer: idioms RefactoringTool: Skipping optional fixer: set_literal RefactoringTool: Skipping optional fixer: ws_comma RefactoringTool: Refactored ../python3/nltk_contrib/nltk_contrib/scripttranscriber/extractor_unittest.py RefactoringTool: Files that were modified: RefactoringTool: ../python3/nltk_contrib/nltk_contrib/scripttranscriber/extractor_unittest.py --- ../python3/nltk_contrib/nltk_contrib/scripttranscriber/extractor_unittest.py (original) +++ ../python3/nltk_contrib/nltk_contrib/scripttranscriber/extractor_unittest.py (refactored) @@ -64,7 +64,7 @@ 'Token %d differs: %s != %s' % (i, all_tokens[i].String(), GOLDEN_[i]) - print '%s successful' % sys.argv[0] + print('%s successful' % sys.argv[0]) if __name__ == '__main__': + for i in '$(find ../python3 -type f -name '\''*.py'\'' |grep -v '\''Tables\.py'\'')' + 2to3 -w -n ../python3/nltk_contrib/nltk_contrib/scripttranscriber/extractor.py RefactoringTool: Skipping optional fixer: buffer RefactoringTool: Skipping optional fixer: idioms RefactoringTool: 
Skipping optional fixer: set_literal RefactoringTool: Skipping optional fixer: ws_comma RefactoringTool: Refactored ../python3/nltk_contrib/nltk_contrib/scripttranscriber/extractor.py RefactoringTool: Files that were modified: RefactoringTool: ../python3/nltk_contrib/nltk_contrib/scripttranscriber/extractor.py --- ../python3/nltk_contrib/nltk_contrib/scripttranscriber/extractor.py (original) +++ ../python3/nltk_contrib/nltk_contrib/scripttranscriber/extractor.py (refactored) @@ -66,7 +66,7 @@ ## Go 'word' by word to make this more robust to unicode decode ## errors. for w in line.split(): - try: ulinelist.append(unicode(w, 'utf-8')) + try: ulinelist.append(str(w, 'utf-8')) except UnicodeDecodeError: pass uline = ' '.join(ulinelist) clist = [] + for i in '$(find ../python3 -type f -name '\''*.py'\'' |grep -v '\''Tables\.py'\'')' + 2to3 -w -n ../python3/nltk_contrib/nltk_contrib/scripttranscriber/documents.py RefactoringTool: Skipping optional fixer: buffer RefactoringTool: Skipping optional fixer: idioms RefactoringTool: Skipping optional fixer: set_literal RefactoringTool: Skipping optional fixer: ws_comma RefactoringTool: Refactored ../python3/nltk_contrib/nltk_contrib/scripttranscriber/documents.py RefactoringTool: Files that were modified: RefactoringTool: ../python3/nltk_contrib/nltk_contrib/scripttranscriber/documents.py --- ../python3/nltk_contrib/nltk_contrib/scripttranscriber/documents.py (original) +++ ../python3/nltk_contrib/nltk_contrib/scripttranscriber/documents.py (refactored) @@ -124,7 +124,7 @@ def XmlDump(self, file=None, utf8=False): if file is None: - print '%s\n' % (self.XmlEncode(utf8)) + print('%s\n' % (self.XmlEncode(utf8))) return p = open(file, 'w') p.write('%s\n' % self.XmlEncode(utf8)) + for i in '$(find ../python3 -type f -name '\''*.py'\'' |grep -v '\''Tables\.py'\'')' + 2to3 -w -n ../python3/nltk_contrib/nltk_contrib/scripttranscriber/def_pronouncers.py RefactoringTool: Skipping optional fixer: buffer RefactoringTool: Skipping optional fixer: idioms RefactoringTool: Skipping optional fixer: set_literal RefactoringTool: Skipping optional fixer: ws_comma RefactoringTool: No changes to ../python3/nltk_contrib/nltk_contrib/scripttranscriber/def_pronouncers.py RefactoringTool: Files that need to be modified: RefactoringTool: ../python3/nltk_contrib/nltk_contrib/scripttranscriber/def_pronouncers.py + for i in '$(find ../python3 -type f -name '\''*.py'\'' |grep -v '\''Tables\.py'\'')' + 2to3 -w -n ../python3/nltk_contrib/nltk_contrib/scripttranscriber/chinese_extractor.py RefactoringTool: Skipping optional fixer: buffer RefactoringTool: Skipping optional fixer: idioms RefactoringTool: Skipping optional fixer: set_literal RefactoringTool: Skipping optional fixer: ws_comma RefactoringTool: Refactored ../python3/nltk_contrib/nltk_contrib/scripttranscriber/chinese_extractor.py RefactoringTool: Files that were modified: RefactoringTool: ../python3/nltk_contrib/nltk_contrib/scripttranscriber/chinese_extractor.py --- ../python3/nltk_contrib/nltk_contrib/scripttranscriber/chinese_extractor.py (original) +++ ../python3/nltk_contrib/nltk_contrib/scripttranscriber/chinese_extractor.py (refactored) @@ -1305,7 +1305,7 @@ """ def LineSegment(self, line): - try: utext = unicode(line.strip(), 'utf-8') + try: utext = str(line.strip(), 'utf-8') except TypeError: utext = line.strip() word = [] for u in utext: @@ -1331,7 +1331,7 @@ """ def LineSegment(self, line): - try: utext = unicode(line.strip(), 'utf-8') + try: utext = str(line.strip(), 'utf-8') except TypeError: utext = 
line.strip() for i in range(len(utext)): for k in [4, 3, 2]: + for i in '$(find ../python3 -type f -name '\''*.py'\'' |grep -v '\''Tables\.py'\'')' + 2to3 -w -n ../python3/nltk_contrib/nltk_contrib/scripttranscriber/auxiliary_comp.py RefactoringTool: Skipping optional fixer: buffer RefactoringTool: Skipping optional fixer: idioms RefactoringTool: Skipping optional fixer: set_literal RefactoringTool: Skipping optional fixer: ws_comma RefactoringTool: Refactored ../python3/nltk_contrib/nltk_contrib/scripttranscriber/auxiliary_comp.py RefactoringTool: Files that were modified: RefactoringTool: ../python3/nltk_contrib/nltk_contrib/scripttranscriber/auxiliary_comp.py --- ../python3/nltk_contrib/nltk_contrib/scripttranscriber/auxiliary_comp.py (original) +++ ../python3/nltk_contrib/nltk_contrib/scripttranscriber/auxiliary_comp.py (refactored) @@ -72,7 +72,7 @@ def LookupString(chars, convert=False): pys = [] - for u in unicode(chars, 'utf8'): + for u in str(chars, 'utf8'): try: py = PINYIN_TABLE_[u.encode('utf8')] npy = [] + for i in '$(find ../python3 -type f -name '\''*.py'\'' |grep -v '\''Tables\.py'\'')' + 2to3 -w -n ../python3/nltk_contrib/nltk_contrib/scripttranscriber/alignpairsFST.py RefactoringTool: Skipping optional fixer: buffer RefactoringTool: Skipping optional fixer: idioms RefactoringTool: Skipping optional fixer: set_literal RefactoringTool: Skipping optional fixer: ws_comma RefactoringTool: Refactored ../python3/nltk_contrib/nltk_contrib/scripttranscriber/alignpairsFST.py RefactoringTool: Files that were modified: RefactoringTool: ../python3/nltk_contrib/nltk_contrib/scripttranscriber/alignpairsFST.py --- ../python3/nltk_contrib/nltk_contrib/scripttranscriber/alignpairsFST.py (original) +++ ../python3/nltk_contrib/nltk_contrib/scripttranscriber/alignpairsFST.py (refactored) @@ -68,12 +68,12 @@ row_label, costs = line.split(None,1) if genSymbols: symbols.append(row_label) if row_label not in symbols: - print "Error: label (%s) not in defined symbols list" % row_label + print("Error: label (%s) not in defined symbols list" % row_label) sys.exit(1) rows.append(row_label) costs = costs.split() if len(costs) != len(cols): - print 'Error: wrong number of costs on line %s' % line + print('Error: wrong number of costs on line %s' % line) sys.exit(1) for c in range(len(costs)): if costs[c] in ('inf', 'Inf', 'INF'): costs[c] = INF_ @@ -247,7 +247,7 @@ aln1, aln2, cost = AlignFSTs(binph1, binph2, binmatrix, syms) #aln1 = aln1.replace(EPSILON_, SHORT_EPS_) #aln2 = aln2.replace(EPSILON_, SHORT_EPS_) - print '%s\t%s\t%.6f' % (aln1, aln2, cost) + print('%s\t%s\t%.6f' % (aln1, aln2, cost)) ret = os.system('rm -f %s' % (binmatrix)) if ret != 0: sys.stderr.write('Error in rm\'ing matrix\n') @@ -255,8 +255,8 @@ if infile is not None: infp.close() def usage(called): - print '%s -m [-s ]' % (called), - print '[-i ]' + print('%s -m [-s ]' % (called), end=' ') + print('[-i ]') if __name__ == '__main__': try: @@ -282,6 +282,6 @@ infile = a if matfile is None: usage(sys.argv[0]) - print "Error: must provide a cost-matrix file." 
+ print("Error: must provide a cost-matrix file.") sys.exit(2) main(matfile, symfile, infile) + for i in '$(find ../python3 -type f -name '\''*.py'\'' |grep -v '\''Tables\.py'\'')' + 2to3 -w -n ../python3/nltk_contrib/nltk_contrib/scripttranscriber/Utils/script.py RefactoringTool: Skipping optional fixer: buffer RefactoringTool: Skipping optional fixer: idioms RefactoringTool: Skipping optional fixer: set_literal RefactoringTool: Skipping optional fixer: ws_comma RefactoringTool: Refactored ../python3/nltk_contrib/nltk_contrib/scripttranscriber/Utils/script.py --- ../python3/nltk_contrib/nltk_contrib/scripttranscriber/Utils/script.py (original) +++ ../python3/nltk_contrib/nltk_contrib/scripttranscriber/Utils/script.py (refactored) @@ -19,97 +19,97 @@ """ Return the script of a unicode codepoint, only considering those codepoints that correspond to characters of a script. """ - if u >= u'\u0000' and u <= u'\u007F': return 'Latin' - elif u >= u'\u0080' and u <= u'\u00FF': return 'Latin' - elif u >= u'\u0100' and u <= u'\u017F': return 'Latin' - elif u >= u'\u0180' and u <= u'\u024F': return 'Latin' - elif u >= u'\u0370' and u <= u'\u03FF': return 'Greek' - elif u >= u'\u0400' and u <= u'\u04FF': return 'Cyrillic' - elif u >= u'\u0500' and u <= u'\u052F': return 'Cyrillic' - elif u >= u'\u0530' and u <= u'\u058F': return 'Armenian' - elif u >= u'\u0590' and u <= u'\u05FF': return 'Hebrew' - elif u >= u'\u0600' and u <= u'\u06FF': return 'Arabic' - elif u >= u'\u0700' and u <= u'\u074F': return 'Syriac' - elif u >= u'\u0750' and u <= u'\u077F': return 'Arabic' - elif u >= u'\u0780' and u <= u'\u07BF': return 'Thaana' - elif u >= u'\u0900' and u <= u'\u097F': return 'Devanagari' - elif u >= u'\u0980' and u <= u'\u09FF': return 'Bengali' - elif u >= u'\u0A00' and u <= u'\u0A7F': return 'Gurmukhi' - elif u >= u'\u0A80' and u <= u'\u0AFF': return 'Gujarati' - elif u >= u'\u0B00' and u <= u'\u0B7F': return 'Oriya' - elif u >= u'\u0B80' and u <= u'\u0BFF': return 'Tamil' - elif u >= u'\u0C00' and u <= u'\u0C7F': return 'Telugu' - elif u >= u'\u0C80' and u <= u'\u0CFF': return 'Kannada' - elif u >= u'\u0D00' and u <= u'\u0D7F': return 'Malayalam' - elif u >= u'\u0D80' and u <= u'\u0DFF': return 'Sinhala' - elif u >= u'\u0E00' and u <= u'\u0E7F': return 'Thai' - elif u >= u'\u0E80' and u <= u'\u0EFF': return 'Lao' - elif u >= u'\u0F00' and u <= u'\u0FFF': return 'Tibetan' - elif u >= u'\u1000' and u <= u'\u109F': return 'Burmese' - elif u >= u'\u10A0' and u <= u'\u10FF': return 'Georgian' - elif u >= u'\u1100' and u <= u'\u11FF': return 'Hangul' - elif u >= u'\u1200' and u <= u'\u137F': return 'Ethiopic' - elif u >= u'\u1380' and u <= u'\u139F': return 'Ethiopic' - elif u >= u'\u13A0' and u <= u'\u13FF': return 'Cherokee' - elif u >= u'\u1400' and u <= u'\u167F': return 'UCS' - elif u >= u'\u1680' and u <= u'\u169F': return 'Ogham' - elif u >= u'\u16A0' and u <= u'\u16FF': return 'Runic' - elif u >= u'\u1700' and u <= u'\u171F': return 'Tagalog' - elif u >= u'\u1720' and u <= u'\u173F': return 'Hanunoo' - elif u >= u'\u1740' and u <= u'\u175F': return 'Buhid' - elif u >= u'\u1760' and u <= u'\u177F': return 'Tagbanwa' - elif u >= u'\u1780' and u <= u'\u17FF': return 'Khmer' - elif u >= u'\u1800' and u <= u'\u18AF': return 'Mongolian' - elif u >= u'\u1900' and u <= u'\u194F': return 'Limbu' - elif u >= u'\u1950' and u <= u'\u197F': return 'Tai Le' - elif u >= u'\u1980' and u <= u'\u19DF': return 'New Tai Lue' - elif u >= u'\u19E0' and u <= u'\u19FF': return 'Khmer' - elif u >= u'\u1A00' and u <= 
u'\u1A1F': return 'Buginese' - elif u >= u'\u1E00' and u <= u'\u1EFF': return 'Latin' - elif u >= u'\u1F00' and u <= u'\u1FFF': return 'Greek' - elif u >= u'\u2C00' and u <= u'\u2C5F': return 'Glagolitic' - elif u >= u'\u2C80' and u <= u'\u2CFF': return 'Coptic' - elif u >= u'\u2D00' and u <= u'\u2D2F': return 'Georgian' - elif u >= u'\u2D30' and u <= u'\u2D7F': return 'Tifinagh' - elif u >= u'\u2D80' and u <= u'\u2DDF': return 'Ethiopic' - elif u >= u'\u2E80' and u <= u'\u2EFF': return 'CJK' - elif u >= u'\u2F00' and u <= u'\u2FDF': return 'Kangxi Radicals' - elif u >= u'\u3040' and u <= u'\u309F': return 'Hiragana' - elif u >= u'\u30A0' and u <= u'\u30FF': return 'Katakana' - elif u >= u'\u3100' and u <= u'\u312F': return 'Bopomofo' - elif u >= u'\u3130' and u <= u'\u318F': return 'Hangul' - elif u >= u'\u3190' and u <= u'\u319F': return 'Kanbun' - elif u >= u'\u31A0' and u <= u'\u31BF': return 'Bopomofo' - elif u >= u'\u31F0' and u <= u'\u31FF': return 'Katakana' - elif u >= u'\u3300' and u <= u'\u33FF': return 'CJK' - elif u >= u'\u3400' and u <= u'\u4DBF': return 'CJK' - elif u >= u'\u4E00' and u <= u'\u9FFF': return 'CJK' - elif u >= u'\uA000' and u <= u'\uA48F': return 'Yi' - elif u >= u'\uA490' and u <= u'\uA4CF': return 'Yi' - elif u >= u'\uA800' and u <= u'\uA82F': return 'Syloti Nagri' - elif u >= u'\uAC00' and u <= u'\uD7AF': return 'Hangul' - elif u >= u'\uF900' and u <= u'\uFAFF': return 'CJK' - elif u >= u'\uFE30' and u <= u'\uFE4F': return 'CJK' - elif u >= u'\uFE70' and u <= u'\uFEFF': return 'Arabic' - elif u >= u'\u10000' and u <= u'\u1007F': return 'Linear B' - elif u >= u'\u10080' and u <= u'\u100FF': return 'Linear B' - elif u >= u'\u10300' and u <= u'\u1032F': return 'Old Italic' - elif u >= u'\u10330' and u <= u'\u1034F': return 'Gothic' - elif u >= u'\u10380' and u <= u'\u1039F': return 'Ugaritic' - elif u >= u'\u103A0' and u <= u'\u103DF': return 'Old Persian' - elif u >= u'\u10400' and u <= u'\u1044F': return 'Deseret' - elif u >= u'\u10450' and u <= u'\u1047F': return 'Shavian' - elif u >= u'\u10480' and u <= u'\u104AF': return 'Osmanya' - elif u >= u'\u10800' and u <= u'\u1083F': return 'Cypriot Syllabary' - elif u >= u'\u10A00' and u <= u'\u10A5F': return 'Kharoshthi' - elif u >= u'\u20000' and u <= u'\u2A6DF': return 'CJK' - elif u >= u'\u2F800' and u <= u'\u2FA1F': return 'CJK' + if u >= '\u0000' and u <= '\u007F': return 'Latin' + elif u >= '\u0080' and u <= '\u00FF': return 'Latin' + elif u >= '\u0100' and u <= '\u017F': return 'Latin' + elif u >= '\u0180' and u <= '\u024F': return 'Latin' + elif u >= '\u0370' and u <= '\u03FF': return 'Greek' + elif u >= '\u0400' and u <= '\u04FF': return 'Cyrillic' + elif u >= '\u0500' and u <= '\u052F': return 'Cyrillic' + elif u >= '\u0530' and u <= '\u058F': return 'Armenian' + elif u >= '\u0590' and u <= '\u05FF': return 'Hebrew' + elif u >= '\u0600' and u <= '\u06FF': return 'Arabic' + elif u >= '\u0700' and u <= '\u074F': return 'Syriac' + elif u >= '\u0750' and u <= '\u077F': return 'Arabic' + elif u >= '\u0780' and u <= '\u07BF': return 'Thaana' + elif u >= '\u0900' and u <= '\u097F': return 'Devanagari' + elif u >= '\u0980' and u <= '\u09FF': return 'Bengali' + elif u >= '\u0A00' and u <= '\u0A7F': return 'Gurmukhi' + elif u >= '\u0A80' and u <= '\u0AFF': return 'Gujarati' + elif u >= '\u0B00' and u <= '\u0B7F': return 'Oriya' + elif u >= '\u0B80' and u <= '\u0BFF': return 'Tamil' + elif u >= '\u0C00' and u <= '\u0C7F': return 'Telugu' + elif u >= '\u0C80' and u <= '\u0CFF': return 'Kannada' + elif u >= '\u0D00' 
and u <= '\u0D7F': return 'Malayalam' + elif u >= '\u0D80' and u <= '\u0DFF': return 'Sinhala' + elif u >= '\u0E00' and u <= '\u0E7F': return 'Thai' + elif u >= '\u0E80' and u <= '\u0EFF': return 'Lao' + elif u >= '\u0F00' and u <= '\u0FFF': return 'Tibetan' + elif u >= '\u1000' and u <= '\u109F': return 'Burmese' + elif u >= '\u10A0' and u <= '\u10FF': return 'Georgian' + elif u >= '\u1100' and u <= '\u11FF': return 'Hangul' + elif u >= '\u1200' and u <= '\u137F': return 'Ethiopic' + elif u >= '\u1380' and u <= '\u139F': return 'Ethiopic' + elif u >= '\u13A0' and u <= '\u13FF': return 'Cherokee' + elif u >= '\u1400' and u <= '\u167F': return 'UCS' + elif u >= '\u1680' and u <= '\u169F': return 'Ogham' + elif u >= '\u16A0' and u <= '\u16FF': return 'Runic' + elif u >= '\u1700' and u <= '\u171F': return 'Tagalog' + elif u >= '\u1720' and u <= '\u173F': return 'Hanunoo' + elif u >= '\u1740' and u <= '\u175F': return 'Buhid' + elif u >= '\u1760' and u <= '\u177F': return 'Tagbanwa' + elif u >= '\u1780' and u <= '\u17FF': return 'Khmer' + elif u >= '\u1800' and u <= '\u18AF': return 'Mongolian' + elif u >= '\u1900' and u <= '\u194F': return 'Limbu' + elif u >= '\u1950' and u <= '\u197F': return 'Tai Le' + elif u >= '\u1980' and u <= '\u19DF': return 'New Tai Lue' + elif u >= '\u19E0' and u <= '\u19FF': return 'Khmer' + elif u >= '\u1A00' and u <= '\u1A1F': return 'Buginese' + elif u >= '\u1E00' and u <= '\u1EFF': return 'Latin' + elif u >= '\u1F00' and u <= '\u1FFF': return 'Greek' RefactoringTool: Files that were modified: RefactoringTool: ../python3/nltk_contrib/nltk_contrib/scripttranscriber/Utils/script.py + elif u >= '\u2C00' and u <= '\u2C5F': return 'Glagolitic' + elif u >= '\u2C80' and u <= '\u2CFF': return 'Coptic' + elif u >= '\u2D00' and u <= '\u2D2F': return 'Georgian' + elif u >= '\u2D30' and u <= '\u2D7F': return 'Tifinagh' + elif u >= '\u2D80' and u <= '\u2DDF': return 'Ethiopic' + elif u >= '\u2E80' and u <= '\u2EFF': return 'CJK' + elif u >= '\u2F00' and u <= '\u2FDF': return 'Kangxi Radicals' + elif u >= '\u3040' and u <= '\u309F': return 'Hiragana' + elif u >= '\u30A0' and u <= '\u30FF': return 'Katakana' + elif u >= '\u3100' and u <= '\u312F': return 'Bopomofo' + elif u >= '\u3130' and u <= '\u318F': return 'Hangul' + elif u >= '\u3190' and u <= '\u319F': return 'Kanbun' + elif u >= '\u31A0' and u <= '\u31BF': return 'Bopomofo' + elif u >= '\u31F0' and u <= '\u31FF': return 'Katakana' + elif u >= '\u3300' and u <= '\u33FF': return 'CJK' + elif u >= '\u3400' and u <= '\u4DBF': return 'CJK' + elif u >= '\u4E00' and u <= '\u9FFF': return 'CJK' + elif u >= '\uA000' and u <= '\uA48F': return 'Yi' + elif u >= '\uA490' and u <= '\uA4CF': return 'Yi' + elif u >= '\uA800' and u <= '\uA82F': return 'Syloti Nagri' + elif u >= '\uAC00' and u <= '\uD7AF': return 'Hangul' + elif u >= '\uF900' and u <= '\uFAFF': return 'CJK' + elif u >= '\uFE30' and u <= '\uFE4F': return 'CJK' + elif u >= '\uFE70' and u <= '\uFEFF': return 'Arabic' + elif u >= '\u10000' and u <= '\u1007F': return 'Linear B' + elif u >= '\u10080' and u <= '\u100FF': return 'Linear B' + elif u >= '\u10300' and u <= '\u1032F': return 'Old Italic' + elif u >= '\u10330' and u <= '\u1034F': return 'Gothic' + elif u >= '\u10380' and u <= '\u1039F': return 'Ugaritic' + elif u >= '\u103A0' and u <= '\u103DF': return 'Old Persian' + elif u >= '\u10400' and u <= '\u1044F': return 'Deseret' + elif u >= '\u10450' and u <= '\u1047F': return 'Shavian' + elif u >= '\u10480' and u <= '\u104AF': return 'Osmanya' + elif u >= '\u10800' and u
<= '\u1083F': return 'Cypriot Syllabary' + elif u >= '\u10A00' and u <= '\u10A5F': return 'Kharoshthi' + elif u >= '\u20000' and u <= '\u2A6DF': return 'CJK' + elif u >= '\u2F800' and u <= '\u2FA1F': return 'CJK' else: return UNKNOWN_SCRIPT_ def StringToScript(string, encoding='utf8'): stats = {} - try: ustring = unicode(string, encoding) + try: ustring = str(string, encoding) except TypeError: ustring = string for u in ustring: if u.isspace(): continue @@ -126,13 +126,13 @@ def Lower(string, encoding='utf8'): try: - return unicode(string, encoding).lower().encode(encoding) + return str(string, encoding).lower().encode(encoding) except TypeError: return string.lower().encode(encoding) def Upper(string, encoding='utf8'): - return unicode(string, encoding).upper().encode(encoding) + return str(string, encoding).upper().encode(encoding) def SupportsCapitalization(string, encoding='utf8'): @@ -140,7 +140,7 @@ def IsCapitalized(string, encoding='utf8'): - try: ustring = unicode(string, encoding) + try: ustring = str(string, encoding) except TypeError: ustring = string if ustring.lower()[0] == ustring[0]: return False @@ -148,7 +148,7 @@ def IsPunctuation(character, encoding='utf-8'): - try: uchar = unicode(character, encoding) + try: uchar = str(character, encoding) except TypeError: uchar = character return unicodedata.category(uchar)[:1] == 'P' @@ -159,7 +159,7 @@ def HasPunctuation(word, encoding='utf-8'): haspunctuation = False - try: uword = unicode(word, encoding) + try: uword = str(word, encoding) except TypeError: uword = word for uchar in uword: if unicodedata.category(uchar)[:1] == 'P': @@ -170,7 +170,7 @@ def HasDigit(word, encoding='utf-8'): hasdigit = False - try: uword = unicode(word, encoding) + try: uword = str(word, encoding) except TypeError: uword = word for uchar in uword: if unicodedata.category(uchar) == 'Nd': + for i in '$(find ../python3 -type f -name '\''*.py'\'' |grep -v '\''Tables\.py'\'')' + 2to3 -w -n ../python3/nltk_contrib/nltk_contrib/scripttranscriber/Utils/latin.py RefactoringTool: Skipping optional fixer: buffer RefactoringTool: Skipping optional fixer: idioms RefactoringTool: Skipping optional fixer: set_literal RefactoringTool: Skipping optional fixer: ws_comma RefactoringTool: Refactored ../python3/nltk_contrib/nltk_contrib/scripttranscriber/Utils/latin.py RefactoringTool: Files that were modified: RefactoringTool: ../python3/nltk_contrib/nltk_contrib/scripttranscriber/Utils/latin.py --- ../python3/nltk_contrib/nltk_contrib/scripttranscriber/Utils/latin.py (original) +++ ../python3/nltk_contrib/nltk_contrib/scripttranscriber/Utils/latin.py (refactored) @@ -21,7 +21,7 @@ def LatinToWorldBet(string): output = [] some_success = False - for c in unicode(string, 'utf8'): + for c in str(string, 'utf8'): c = c.encode('utf-8') try: output.append(LATIN_[c]) + for i in '$(find ../python3 -type f -name '\''*.py'\'' |grep -v '\''Tables\.py'\'')' + 2to3 -w -n ../python3/nltk_contrib/nltk_contrib/scripttranscriber/Utils/kunyomi_new.py RefactoringTool: Skipping optional fixer: buffer RefactoringTool: Skipping optional fixer: idioms RefactoringTool: Skipping optional fixer: set_literal RefactoringTool: Skipping optional fixer: ws_comma RefactoringTool: Refactored ../python3/nltk_contrib/nltk_contrib/scripttranscriber/Utils/kunyomi_new.py RefactoringTool: Files that were modified: RefactoringTool: ../python3/nltk_contrib/nltk_contrib/scripttranscriber/Utils/kunyomi_new.py --- ../python3/nltk_contrib/nltk_contrib/scripttranscriber/Utils/kunyomi_new.py (original) +++ 
../python3/nltk_contrib/nltk_contrib/scripttranscriber/Utils/kunyomi_new.py (refactored) @@ -41,7 +41,7 @@ output = [] some_success = False internal = False - for c in unicode(string, 'utf8'): + for c in str(string, 'utf8'): c = c.encode('utf-8') try: pron = KUNYOMI_[c] + for i in '$(find ../python3 -type f -name '\''*.py'\'' |grep -v '\''Tables\.py'\'')' + 2to3 -w -n ../python3/nltk_contrib/nltk_contrib/scripttranscriber/Utils/kunyomi.py RefactoringTool: Skipping optional fixer: buffer RefactoringTool: Skipping optional fixer: idioms RefactoringTool: Skipping optional fixer: set_literal RefactoringTool: Skipping optional fixer: ws_comma RefactoringTool: Refactored ../python3/nltk_contrib/nltk_contrib/scripttranscriber/Utils/kunyomi.py RefactoringTool: Files that were modified: RefactoringTool: ../python3/nltk_contrib/nltk_contrib/scripttranscriber/Utils/kunyomi.py --- ../python3/nltk_contrib/nltk_contrib/scripttranscriber/Utils/kunyomi.py (original) +++ ../python3/nltk_contrib/nltk_contrib/scripttranscriber/Utils/kunyomi.py (refactored) @@ -23,7 +23,7 @@ def KanjiToWorldBet(string): output = [] some_success = False - for c in unicode(string, 'utf8'): + for c in str(string, 'utf8'): c = c.encode('utf-8') try: output.append(KUNYOMI_[c]) + for i in '$(find ../python3 -type f -name '\''*.py'\'' |grep -v '\''Tables\.py'\'')' + 2to3 -w -n ../python3/nltk_contrib/nltk_contrib/scripttranscriber/Utils/english.py RefactoringTool: Skipping optional fixer: buffer RefactoringTool: Skipping optional fixer: idioms RefactoringTool: Skipping optional fixer: set_literal RefactoringTool: Skipping optional fixer: ws_comma RefactoringTool: No changes to ../python3/nltk_contrib/nltk_contrib/scripttranscriber/Utils/english.py RefactoringTool: Files that need to be modified: RefactoringTool: ../python3/nltk_contrib/nltk_contrib/scripttranscriber/Utils/english.py + for i in '$(find ../python3 -type f -name '\''*.py'\'' |grep -v '\''Tables\.py'\'')' + 2to3 -w -n ../python3/nltk_contrib/nltk_contrib/scripttranscriber/Utils/chinese.py RefactoringTool: Skipping optional fixer: buffer RefactoringTool: Skipping optional fixer: idioms RefactoringTool: Skipping optional fixer: set_literal RefactoringTool: Skipping optional fixer: ws_comma RefactoringTool: Refactored ../python3/nltk_contrib/nltk_contrib/scripttranscriber/Utils/chinese.py RefactoringTool: Files that were modified: RefactoringTool: ../python3/nltk_contrib/nltk_contrib/scripttranscriber/Utils/chinese.py --- ../python3/nltk_contrib/nltk_contrib/scripttranscriber/Utils/chinese.py (original) +++ ../python3/nltk_contrib/nltk_contrib/scripttranscriber/Utils/chinese.py (refactored) @@ -23,7 +23,7 @@ def HanziToWorldBet(string): output = [] some_success = False - for c in unicode(string, 'utf8'): + for c in str(string, 'utf8'): c = c.encode('utf-8') try: output.append(MANDARIN_[c]) + for i in '$(find ../python3 -type f -name '\''*.py'\'' |grep -v '\''Tables\.py'\'')' + 2to3 -w -n ../python3/nltk_contrib/nltk_contrib/scripttranscriber/Unitran/unitran.py RefactoringTool: Skipping optional fixer: buffer RefactoringTool: Skipping optional fixer: idioms RefactoringTool: Skipping optional fixer: set_literal RefactoringTool: Skipping optional fixer: ws_comma RefactoringTool: Refactored ../python3/nltk_contrib/nltk_contrib/scripttranscriber/Unitran/unitran.py RefactoringTool: Files that were modified: RefactoringTool: ../python3/nltk_contrib/nltk_contrib/scripttranscriber/Unitran/unitran.py --- ../python3/nltk_contrib/nltk_contrib/scripttranscriber/Unitran/unitran.py 
(original) +++ ../python3/nltk_contrib/nltk_contrib/scripttranscriber/Unitran/unitran.py (refactored) @@ -41,7 +41,7 @@ p = open(file) for line in p.readlines(): line = line.split('\n') - unis = "u'\u%s'"%line[0] + unis = "u'\\u%s'"%line[0] uni = eval(unis) Indic.append(uni) return Indic @@ -54,9 +54,9 @@ p = open(file) for line in p.readlines(): line = line.split('\t') - prev = "u'\u%s'"%line[0] - cur = "u'\u%s'"%line[1] - comp = "u'\u%s'"%line[2].strip('\n') + prev = "u'\\u%s'"%line[0] + cur = "u'\\u%s'"%line[1] + comp = "u'\\u%s'"%line[2].strip('\n') pre = eval(prev) curr = eval(cur) comps = eval(comp) @@ -81,7 +81,7 @@ new = '' prev = None token = thaifix.ThaiFix(token) - try: utoken = unicode(token.strip() + ' ', 'utf8') + try: utoken = str(token.strip() + ' ', 'utf8') except UnicodeDecodeError: return token for c in utoken: if prev: + for i in '$(find ../python3 -type f -name '\''*.py'\'' |grep -v '\''Tables\.py'\'')' + 2to3 -w -n ../python3/nltk_contrib/nltk_contrib/scripttranscriber/Unitran/thaifix.py RefactoringTool: Skipping optional fixer: buffer RefactoringTool: Skipping optional fixer: idioms RefactoringTool: Skipping optional fixer: set_literal RefactoringTool: Skipping optional fixer: ws_comma RefactoringTool: Refactored ../python3/nltk_contrib/nltk_contrib/scripttranscriber/Unitran/thaifix.py WARNING: couldn't encode ../python3/nltk_contrib/nltk_contrib/scripttranscriber/Unitran/thaifix.py's diff for your terminal RefactoringTool: Files that were modified: RefactoringTool: ../python3/nltk_contrib/nltk_contrib/scripttranscriber/Unitran/thaifix.py --- ../python3/nltk_contrib/nltk_contrib/scripttranscriber/Unitran/thaifix.py (original) +++ ../python3/nltk_contrib/nltk_contrib/scripttranscriber/Unitran/thaifix.py (refactored) @@ -25,78 +25,78 @@ """ ThaiTable = { + for i in '$(find ../python3 -type f -name '\''*.py'\'' |grep -v '\''Tables\.py'\'')' + 2to3 -w -n ../python3/nltk_contrib/nltk_contrib/scripttranscriber/Unitran/mk_sampa_table.py RefactoringTool: Skipping optional fixer: buffer RefactoringTool: Skipping optional fixer: idioms RefactoringTool: Skipping optional fixer: set_literal RefactoringTool: Skipping optional fixer: ws_comma RefactoringTool: Refactored ../python3/nltk_contrib/nltk_contrib/scripttranscriber/Unitran/mk_sampa_table.py RefactoringTool: Files that were modified: RefactoringTool: ../python3/nltk_contrib/nltk_contrib/scripttranscriber/Unitran/mk_sampa_table.py --- ../python3/nltk_contrib/nltk_contrib/scripttranscriber/Unitran/mk_sampa_table.py (original) +++ ../python3/nltk_contrib/nltk_contrib/scripttranscriber/Unitran/mk_sampa_table.py (refactored) @@ -34,7 +34,7 @@ p = open('X_Tables.py', 'w') p.write('# coding=utf-8\n') p.write('TransTable = {\n') - keys = newTable.keys() + keys = list(newTable.keys()) keys.sort() for u in keys: xstring, utf = newTable[u] + for i in '$(find ../python3 -type f -name '\''*.py'\'' |grep -v '\''Tables\.py'\'')' + 2to3 -w -n ../python3/nltk_contrib/nltk_contrib/scripttranscriber/Unitran/Wb2Xs.py RefactoringTool: Skipping optional fixer: buffer RefactoringTool: Skipping optional fixer: idioms RefactoringTool: Skipping optional fixer: set_literal RefactoringTool: Skipping optional fixer: ws_comma RefactoringTool: No changes to ../python3/nltk_contrib/nltk_contrib/scripttranscriber/Unitran/Wb2Xs.py RefactoringTool: Files that need to be modified: RefactoringTool: ../python3/nltk_contrib/nltk_contrib/scripttranscriber/Unitran/Wb2Xs.py + for i in '$(find ../python3 -type f -name '\''*.py'\'' |grep -v '\''Tables\.py'\'')' + 
2to3 -w -n ../python3/nltk_contrib/nltk_contrib/scripttranscriber/Unitran/Utils/gentable.py RefactoringTool: Skipping optional fixer: buffer RefactoringTool: Skipping optional fixer: idioms RefactoringTool: Skipping optional fixer: set_literal RefactoringTool: Skipping optional fixer: ws_comma RefactoringTool: Refactored ../python3/nltk_contrib/nltk_contrib/scripttranscriber/Unitran/Utils/gentable.py RefactoringTool: Files that were modified: RefactoringTool: ../python3/nltk_contrib/nltk_contrib/scripttranscriber/Unitran/Utils/gentable.py --- ../python3/nltk_contrib/nltk_contrib/scripttranscriber/Unitran/Utils/gentable.py (original) +++ ../python3/nltk_contrib/nltk_contrib/scripttranscriber/Unitran/Utils/gentable.py (refactored) @@ -27,9 +27,9 @@ sys.stdout = open('Tables.py', 'w') -print '# coding=utf-8' +print('# coding=utf-8') -print "TransTable = {" +print("TransTable = {") for line in sys.stdin.readlines(): if not '#' in line: @@ -37,8 +37,8 @@ if len(line[0]) > 1: if len(line) == 1: worldbet = '(##)' else: worldbet = line[1] - unistring = "u'\u%s'" % line[0] + unistring = "u'\\u%s'" % line[0] uni = eval(unistring) utf8 = uni.encode('utf8') - print " %s : ['%s','%s']," % (unistring, worldbet, utf8) -print " }" + print(" %s : ['%s','%s']," % (unistring, worldbet, utf8)) +print(" }") + for i in '$(find ../python3 -type f -name '\''*.py'\'' |grep -v '\''Tables\.py'\'')' + 2to3 -w -n ../python3/nltk_contrib/nltk_contrib/scripttranscriber/MinEditDist/mEdit.py RefactoringTool: Skipping optional fixer: buffer RefactoringTool: Skipping optional fixer: idioms RefactoringTool: Skipping optional fixer: set_literal RefactoringTool: Skipping optional fixer: ws_comma RefactoringTool: Refactored ../python3/nltk_contrib/nltk_contrib/scripttranscriber/MinEditDist/mEdit.py RefactoringTool: Files that were modified: RefactoringTool: ../python3/nltk_contrib/nltk_contrib/scripttranscriber/MinEditDist/mEdit.py --- ../python3/nltk_contrib/nltk_contrib/scripttranscriber/MinEditDist/mEdit.py (original) +++ ../python3/nltk_contrib/nltk_contrib/scripttranscriber/MinEditDist/mEdit.py (refactored) @@ -52,7 +52,7 @@ ## List of all features -featList = FCDic.keys() +featList = list(FCDic.keys()) LClist = ['L', 'C', 'D'] PClist = ['C', 'D'] + for i in '$(find ../python3 -type f -name '\''*.py'\'' |grep -v '\''Tables\.py'\'')' + 2to3 -w -n ../python3/nltk_contrib/nltk_contrib/rte/logicentail.py RefactoringTool: Skipping optional fixer: buffer RefactoringTool: Skipping optional fixer: idioms RefactoringTool: Skipping optional fixer: set_literal RefactoringTool: Skipping optional fixer: ws_comma RefactoringTool: Refactored ../python3/nltk_contrib/nltk_contrib/rte/logicentail.py --- ../python3/nltk_contrib/nltk_contrib/rte/logicentail.py (original) +++ ../python3/nltk_contrib/nltk_contrib/rte/logicentail.py (refactored) @@ -40,13 +40,13 @@ if text_drs_list: text_ex = text_drs_list[0].simplify().toFol() else: - if verbose: print 'ERROR: No readings were generated for the Text' + if verbose: print('ERROR: No readings were generated for the Text') hyp_drs_list = glueclass.parse_to_meaning(hyp) if hyp_drs_list: hyp_ex = hyp_drs_list[0].simplify().toFol() else: - if verbose: print 'ERROR: No readings were generated for the Hypothesis' + if verbose: print('ERROR: No readings were generated for the Hypothesis') #1. proof T -> H #2. proof (BK & T) -> H @@ -56,27 +56,27 @@ #6. 
satisfy BK & T & H result = inference.Prover9().prove(hyp_ex, [text_ex]) - if verbose: print 'prove: T -> H: %s' % result + if verbose: print('prove: T -> H: %s' % result) if not result: bk = self._generate_BK(text, hyp, verbose) bk_exs = [bk_pair[0] for bk_pair in bk] if verbose: - print 'Generated Background Knowledge:' + print('Generated Background Knowledge:') for bk_ex in bk_exs: - print bk_ex + print(bk_ex) result = inference.Prover9().prove(hyp_ex, [text_ex]+bk_exs) - if verbose: print 'prove: (T & BK) -> H: %s' % result + if verbose: print('prove: (T & BK) -> H: %s' % result) if not result: consistent = self.check_consistency(bk_exs+[text_ex]) - if verbose: print 'consistency check: (BK & T): %s' % consistent + if verbose: print('consistency check: (BK & T): %s' % consistent) if consistent: consistent = self.check_consistency(bk_exs+[text_ex, hyp_ex]) - if verbose: print 'consistency check: (BK & T & H): %s' % consistent + if verbose: print('consistency check: (BK & T & H): %s' % consistent) return result @@ -98,8 +98,8 @@ hypbow = set(word.lower() for word in hyp) if verbose: - print 'textbow: %s' % textbow - print 'hypbow: %s' % hypbow + print('textbow: %s' % textbow) + print('hypbow: %s' % hypbow) if self.stop: textbow = textbow - self.stopwords @@ -225,9 +225,9 @@ tagger = RTEInferenceTagger() text = 'John see a car' - print 'Text: ', text + print('Text: ', text) hyp = 'John watch an auto' - print 'Hyp: ', hyp + print('Hyp: ', hyp) # text_ex = LogicParser().parse('exists e x y.(david(x) & own(e) & subj(e,x) & obj(e,y) & car(y))') # hyp_ex = LogicParser().parse('exists e x y.(david(x) & have(e) & subj(e,x) & obj(e,y) & auto(y))') @@ -237,17 +237,17 @@ if text_drs_list: text_ex = text_drs_list[0].simplify().toFol() else: - print 'ERROR: No readings were be generated for the Text' + print('ERROR: No readings were be generated for the Text') hyp_drs_list = glueclass.parse_to_meaning(hyp) if hyp_drs_list: hyp_ex = hyp_drs_list[0].simplify().toFol() else: - print 'ERROR: No readings were be generated for the Hypothesis' - - print 'Text: ', text_ex - print 'Hyp: ', hyp_ex - print '' + print('ERROR: No readings were be generated for the Hypothesis') + + print('Text: ', text_ex) + print('Hyp: ', hyp_ex) + print('') #1. proof T -> H #2. proof (BK & T) -> H @@ -257,67 +257,67 @@ #6. 
satisfy BK & T & H result = inference.Prover9().prove(hyp_ex, [text_ex]) - print 'prove: T -> H: %s' % result - if result: - print 'Logical entailment\n' RefactoringTool: Files that were modified: RefactoringTool: ../python3/nltk_contrib/nltk_contrib/rte/logicentail.py - else: - print 'No logical entailment\n' + print('prove: T -> H: %s' % result) + if result: + print('Logical entailment\n') + else: + print('No logical entailment\n') bk = tagger._generate_BK(text, hyp, verbose) bk_exs = [bk_pair[0] for bk_pair in bk] - print 'Generated Background Knowledge:' + print('Generated Background Knowledge:') for bk_ex in bk_exs: - print bk_ex - print '' + print(bk_ex) + print('') result = inference.Prover9().prove(hyp_ex, [text_ex]+bk_exs) - print 'prove: (T & BK) -> H: %s' % result - if result: - print 'Logical entailment\n' - else: - print 'No logical entailment\n' + print('prove: (T & BK) -> H: %s' % result) + if result: + print('Logical entailment\n') + else: + print('No logical entailment\n') # Check if the background knowledge axioms are inconsistent result = inference.Prover9().prove(assumptions=bk_exs+[text_ex]).prove() - print 'prove: (BK & T): %s' % result - if result: - print 'Inconsistency -> Entailment unknown\n' - else: - print 'No inconsistency\n' + print('prove: (BK & T): %s' % result) + if result: + print('Inconsistency -> Entailment unknown\n') + else: + print('No inconsistency\n') result = inference.Prover9().prove(assumptions=bk_exs+[text_ex, hyp_ex]) - print 'prove: (BK & T & H): %s' % result - if result: - print 'Inconsistency -> Entailment unknown\n' - else: - print 'No inconsistency\n' + print('prove: (BK & T & H): %s' % result) + if result: + print('Inconsistency -> Entailment unknown\n') + else: + print('No inconsistency\n') result = inference.Mace().build_model(assumptions=bk_exs+[text_ex]) - print 'satisfy: (BK & T): %s' % result - if result: - print 'No inconsistency\n' - else: - print 'Inconsistency -> Entailment unknown\n' + print('satisfy: (BK & T): %s' % result) + if result: + print('No inconsistency\n') + else: + print('Inconsistency -> Entailment unknown\n') result = inference.Mace().build_model(assumptions=bk_exs+[text_ex, hyp_ex]).build_model() - print 'satisfy: (BK & T & H): %s' % result - if result: - print 'No inconsistency\n' - else: - print 'Inconsistency -> Entailment unknown\n' + print('satisfy: (BK & T & H): %s' % result) + if result: + print('No inconsistency\n') + else: + print('Inconsistency -> Entailment unknown\n') def test_check_consistency(): a = LogicParser().parse('man(j)') b = LogicParser().parse('-man(j)') - print '%s, %s: %s' % (a, b, RTEInferenceTagger().check_consistency([a,b], True)) - print '%s, %s: %s' % (a, a, RTEInferenceTagger().check_consistency([a,a], True)) + print('%s, %s: %s' % (a, b, RTEInferenceTagger().check_consistency([a,b], True))) + print('%s, %s: %s' % (a, a, RTEInferenceTagger().check_consistency([a,a], True))) def tag(text, hyp): - print 'Text: ', text - print 'Hyp: ', hyp - print 'Entailment =', RTEInferenceTagger().tag_sentences(text, hyp, True) - print '' + print('Text: ', text) + print('Hyp: ', hyp) + print('Entailment =', RTEInferenceTagger().tag_sentences(text, hyp, True)) + print('') if __name__ == '__main__': # test_check_consistency() + for i in '$(find ../python3 -type f -name '\''*.py'\'' |grep -v '\''Tables\.py'\'')' + 2to3 -w -n ../python3/nltk_contrib/nltk_contrib/rte/__init__.py RefactoringTool: Skipping optional fixer: buffer RefactoringTool: Skipping optional fixer: idioms RefactoringTool: Skipping
optional fixer: set_literal RefactoringTool: Skipping optional fixer: ws_comma RefactoringTool: No files need to be modified. + for i in '$(find ../python3 -type f -name '\''*.py'\'' |grep -v '\''Tables\.py'\'')' + 2to3 -w -n ../python3/nltk_contrib/nltk_contrib/refexpr/util.py RefactoringTool: Skipping optional fixer: buffer RefactoringTool: Skipping optional fixer: idioms RefactoringTool: Skipping optional fixer: set_literal RefactoringTool: Skipping optional fixer: ws_comma RefactoringTool: Refactored ../python3/nltk_contrib/nltk_contrib/refexpr/util.py RefactoringTool: Files that were modified: RefactoringTool: ../python3/nltk_contrib/nltk_contrib/refexpr/util.py --- ../python3/nltk_contrib/nltk_contrib/refexpr/util.py (original) +++ ../python3/nltk_contrib/nltk_contrib/refexpr/util.py (refactored) @@ -97,7 +97,7 @@ # Put the highest priority attributes next to the noun for attr in attr_prefs: if (attrs.count(attr) > 0): - if (handlers != None) and (handlers.has_key(attr)): + if (handlers != None) and (attr in handlers): attr_queue.insert(0, handlers[attr](desc_dict[attr])) else: attr_queue.insert(0, desc_dict[attr]) @@ -138,13 +138,13 @@ # There is a difference between generating the phrases: # "the box on the table" and "the table on which the box sits" if cur_rel[2] == target_id: - if (handlers != None) and (handlers.has_key(rel_desc)): + if (handlers != None) and (rel_desc in handlers): rel_desc = handlers[rel_desc](True) other_desc = generate_phrase_rel(other_attrs, attr_prefs, cur_rel[3], handlers, False) clauses.append("%s %s %s" % (target_desc, rel_desc, other_desc)) else: - if (handlers != None) and (handlers.has_key(rel_desc)): + if (handlers != None) and (rel_desc in handlers): rel_desc = handlers[rel_desc](False) other_desc = generate_phrase_rel(other_attrs, attr_prefs, cur_rel[2], handlers, False) + for i in '$(find ../python3 -type f -name '\''*.py'\'' |grep -v '\''Tables\.py'\'')' + 2to3 -w -n ../python3/nltk_contrib/nltk_contrib/refexpr/relational.py RefactoringTool: Skipping optional fixer: buffer RefactoringTool: Skipping optional fixer: idioms RefactoringTool: Skipping optional fixer: set_literal RefactoringTool: Skipping optional fixer: ws_comma RefactoringTool: Refactored ../python3/nltk_contrib/nltk_contrib/refexpr/relational.py RefactoringTool: Files that were modified: RefactoringTool: ../python3/nltk_contrib/nltk_contrib/refexpr/relational.py --- ../python3/nltk_contrib/nltk_contrib/refexpr/relational.py (original) +++ ../python3/nltk_contrib/nltk_contrib/refexpr/relational.py (refactored) @@ -12,9 +12,9 @@ # See the License for the specific language governing permissions and # limitations under the License. -import constraint +from . 
import constraint from copy import copy, deepcopy -from util import validate_facts, Type, Rel, generate_phrase_rel +from .util import validate_facts, Type, Rel, generate_phrase_rel class _RelationalVar: """Internal class used to represent relational variables""" @@ -70,7 +70,7 @@ def __fact_replace(self, fact, to_replace, replace_with): """Replaces all occurrences of to_replace in fact with replace_with""" - return fact[:2] + map(lambda fact_id: replace_with if (not isinstance(fact_id, _RelationalVar) and fact_id == to_replace) else fact_id, fact[2:]) + return fact[:2] + [replace_with if (not isinstance(fact_id, _RelationalVar) and fact_id == to_replace) else fact_id for fact_id in fact[2:]] def __get_context_set(self, constraints, obj_var): """Returns a set of objects that fit the given constraints for obj_var""" @@ -183,11 +183,11 @@ rel = Relational(facts) obj_types = [f for f in facts if f[0] == Type] # Include types in the description for clarity handlers = { - "on" : lambda(lr): "on" if lr else "on which lies", - "in" : lambda(lr): "in" if lr else "in which lies" + "on" : lambda lr: "on" if lr else "on which lies", + "in" : lambda lr: "in" if lr else "in which lies" } # Generate an English description for each object for obj_id in ["c1", "c2", "c3", "b1", "b2", "t1", "t2", "f1"]: - print "%s: %s" % (obj_id, generate_phrase_rel(rel.describe(obj_id) + obj_types, ["color"], obj_id, handlers)) + print("%s: %s" % (obj_id, generate_phrase_rel(rel.describe(obj_id) + obj_types, ["color"], obj_id, handlers))) + for i in '$(find ../python3 -type f -name '\''*.py'\'' |grep -v '\''Tables\.py'\'')' + 2to3 -w -n ../python3/nltk_contrib/nltk_contrib/refexpr/incremental.py RefactoringTool: Skipping optional fixer: buffer RefactoringTool: Skipping optional fixer: idioms RefactoringTool: Skipping optional fixer: set_literal RefactoringTool: Skipping optional fixer: ws_comma RefactoringTool: Refactored ../python3/nltk_contrib/nltk_contrib/refexpr/incremental.py RefactoringTool: Files that were modified: RefactoringTool: ../python3/nltk_contrib/nltk_contrib/refexpr/incremental.py --- ../python3/nltk_contrib/nltk_contrib/refexpr/incremental.py (original) +++ ../python3/nltk_contrib/nltk_contrib/refexpr/incremental.py (refactored) @@ -1,7 +1,7 @@ import string from copy import copy, deepcopy -from util import validate_facts, Type, Rel, generate_phrase +from .util import validate_facts, Type, Rel, generate_phrase class Incremental: """ @@ -181,7 +181,7 @@ # Print English description for each object for obj_id in ["obj1", "obj2", "obj3"]: obj_type = [f for f in facts if f[0] == Type and f[2] == obj_id] # Include type for clarity - print "%s: %s" % (obj_id, generate_phrase(incr.describe(obj_id) + obj_type, ["color", "size"])) + print("%s: %s" % (obj_id, generate_phrase(incr.describe(obj_id) + obj_type, ["color", "size"]))) class Taxonomy: + for i in '$(find ../python3 -type f -name '\''*.py'\'' |grep -v '\''Tables\.py'\'')' + 2to3 -w -n ../python3/nltk_contrib/nltk_contrib/refexpr/gre3d_facts.py RefactoringTool: Skipping optional fixer: buffer RefactoringTool: Skipping optional fixer: idioms RefactoringTool: Skipping optional fixer: set_literal RefactoringTool: Skipping optional fixer: ws_comma RefactoringTool: Refactored ../python3/nltk_contrib/nltk_contrib/refexpr/gre3d_facts.py RefactoringTool: Files that were modified: RefactoringTool: ../python3/nltk_contrib/nltk_contrib/refexpr/gre3d_facts.py --- ../python3/nltk_contrib/nltk_contrib/refexpr/gre3d_facts.py (original) +++ 
../python3/nltk_contrib/nltk_contrib/refexpr/gre3d_facts.py (refactored) @@ -1,8 +1,8 @@ -from full_brevity import * -from incremental import * -from relational import * - -import util +from .full_brevity import * +from .incremental import * +from .relational import * + +from . import util def getFacts(): """ @@ -285,9 +285,9 @@ taxonomy = Taxonomy({}) handlers = { - "in_front_of": lambda(lr): "in front of", - "left_of": lambda(lr): "to the left of", - "right_of": lambda(lr): "to the right of" + "in_front_of": lambda lr: "in front of", + "left_of": lambda lr: "to the left of", + "right_of": lambda lr: "to the right of" } #Print out the referring expressions generated by each algorithm for each scene @@ -301,7 +301,7 @@ rel = Relational(facts[i]) desc_rel = rel.describe("r1") - print "%#02d,\"Full Brevity\",\"%s\"" % (i, util.generate_phrase(desc_fb, ranked_attrs)) - print "%#02d,\"Incremental\",\"%s\"" % (i, util.generate_phrase(desc_incr, ranked_attrs)) - print "%#02d,\"Relational\",\"%s\"" % (i, util.generate_phrase_rel(desc_rel, ranked_attrs, "r1", handlers)) - + print("%#02d,\"Full Brevity\",\"%s\"" % (i, util.generate_phrase(desc_fb, ranked_attrs))) + print("%#02d,\"Incremental\",\"%s\"" % (i, util.generate_phrase(desc_incr, ranked_attrs))) + print("%#02d,\"Relational\",\"%s\"" % (i, util.generate_phrase_rel(desc_rel, ranked_attrs, "r1", handlers))) + + for i in '$(find ../python3 -type f -name '\''*.py'\'' |grep -v '\''Tables\.py'\'')' + 2to3 -w -n ../python3/nltk_contrib/nltk_contrib/refexpr/full_brevity.py RefactoringTool: Skipping optional fixer: buffer RefactoringTool: Skipping optional fixer: idioms RefactoringTool: Skipping optional fixer: set_literal RefactoringTool: Skipping optional fixer: ws_comma RefactoringTool: Refactored ../python3/nltk_contrib/nltk_contrib/refexpr/full_brevity.py RefactoringTool: Files that were modified: RefactoringTool: ../python3/nltk_contrib/nltk_contrib/refexpr/full_brevity.py --- ../python3/nltk_contrib/nltk_contrib/refexpr/full_brevity.py (original) +++ ../python3/nltk_contrib/nltk_contrib/refexpr/full_brevity.py (refactored) @@ -12,7 +12,7 @@ # See the License for the specific language governing permissions and # limitations under the License. 
-from util import validate_facts, Type, Rel, generate_phrase +from .util import validate_facts, Type, Rel, generate_phrase class FullBrevity: """ @@ -30,7 +30,7 @@ """ self.facts = facts self.object_ids = validate_facts(self.facts) - assert not any(map(lambda f: f == Rel, self.facts)), "Full Brevity does not support relationships" + assert not any([f == Rel for f in self.facts]), "Full Brevity does not support relationships" def describe(self, target_id): """ @@ -55,7 +55,7 @@ best_prop = None # Find the property that best constrains the distractors set - for prop_key in properties.keys(): + for prop_key in list(properties.keys()): prop_val = properties[prop_key] dist_set = [dist for dist in distractors if dist[prop_key][1] == prop_val[1]] if (best_set is None) or (len(dist_set) < len(best_set)): @@ -81,5 +81,5 @@ # Print English description for each object for obj_id in ["obj1", "obj2", "obj3"]: obj_type = [f for f in facts if f[0] == Type and f[2] == obj_id] # Include type for clarity - print "%s: %s" % (obj_id, generate_phrase(fb.describe(obj_id) + obj_type, ["color", "size"])) + print("%s: %s" % (obj_id, generate_phrase(fb.describe(obj_id) + obj_type, ["color", "size"]))) + for i in '$(find ../python3 -type f -name '\''*.py'\'' |grep -v '\''Tables\.py'\'')' + 2to3 -w -n ../python3/nltk_contrib/nltk_contrib/refexpr/drawers.py RefactoringTool: Skipping optional fixer: buffer RefactoringTool: Skipping optional fixer: idioms RefactoringTool: Skipping optional fixer: set_literal RefactoringTool: Skipping optional fixer: ws_comma RefactoringTool: Refactored ../python3/nltk_contrib/nltk_contrib/refexpr/drawers.py RefactoringTool: Files that were modified: RefactoringTool: ../python3/nltk_contrib/nltk_contrib/refexpr/drawers.py --- ../python3/nltk_contrib/nltk_contrib/refexpr/drawers.py (original) +++ ../python3/nltk_contrib/nltk_contrib/refexpr/drawers.py (refactored) @@ -1,8 +1,8 @@ from random import shuffle -from full_brevity import * -from relational import * -from incremental import * -from util import generate_phrase, generate_phrase_rel +from .full_brevity import * +from .relational import * +from .incremental import * +from .util import generate_phrase, generate_phrase_rel if __name__ == '__main__': # This data is based on the drawer pictures from Vienthen and Dale (2006) @@ -268,7 +268,7 @@ shuffle(facts, lambda: 0.0) - fb = FullBrevity(filter(lambda f: f[0] != Rel, facts)) + fb = FullBrevity([f for f in facts if f[0] != Rel]) rel = Relational(facts) #The ordered priority for using attributes, important for incremental algorithm ranked_attrs = ["color", "row", "col", "corner"] @@ -279,19 +279,19 @@ #defines how to turn these rules into English phrases handlers = { - "col": lambda(desc): "column %s" % desc, - "row": lambda(desc): "row %s" % desc, - "corner": lambda(desc): "corner", - "above": lambda(lr): "above" if lr else "below", - "below": lambda(lr): "below" if lr else "above", - "right": lambda(lr): "to the right of" if lr else "to the left of", - "left": lambda(lr): "to the left of" if lr else "to the right of" + "col": lambda desc: "column %s" % desc, + "row": lambda desc: "row %s" % desc, + "corner": lambda desc: "corner", + "above": lambda lr: "above" if lr else "below", + "below": lambda lr: "below" if lr else "above", + "right": lambda lr: "to the right of" if lr else "to the left of", + "left": lambda lr: "to the left of" if lr else "to the right of" } #Generate phrases with each algorithm and print to screen for i in range(1, 17): obj_id = "d%s" % i - print "%#02d,\"Full 
Brevity\",\"%s\"" % (i, generate_phrase(fb.describe(obj_id), ranked_attrs, handlers)) - print "%#02d,\"Relational\",\"%s\"" % (i, generate_phrase_rel(rel.describe(obj_id), ranked_attrs, obj_id, handlers)) - print "%#02d,\"Incremental\",\"%s\"" % (i, generate_phrase(incr.describe(obj_id), ranked_attrs, handlers)) - + print("%#02d,\"Full Brevity\",\"%s\"" % (i, generate_phrase(fb.describe(obj_id), ranked_attrs, handlers))) + print("%#02d,\"Relational\",\"%s\"" % (i, generate_phrase_rel(rel.describe(obj_id), ranked_attrs, obj_id, handlers))) + print("%#02d,\"Incremental\",\"%s\"" % (i, generate_phrase(incr.describe(obj_id), ranked_attrs, handlers))) + + for i in '$(find ../python3 -type f -name '\''*.py'\'' |grep -v '\''Tables\.py'\'')' + 2to3 -w -n ../python3/nltk_contrib/nltk_contrib/refexpr/constraint.py RefactoringTool: Skipping optional fixer: buffer RefactoringTool: Skipping optional fixer: idioms RefactoringTool: Skipping optional fixer: set_literal RefactoringTool: Skipping optional fixer: ws_comma RefactoringTool: Refactored ../python3/nltk_contrib/nltk_contrib/refexpr/constraint.py --- ../python3/nltk_contrib/nltk_contrib/refexpr/constraint.py (original) +++ ../python3/nltk_contrib/nltk_contrib/refexpr/constraint.py (refactored) @@ -37,6 +37,7 @@ """ import random import copy +import collections __all__ = ["Problem", "Variable", "Domain", "Unassigned", "Solver", "BacktrackingSolver", "RecursiveBacktrackingSolver", @@ -126,17 +127,17 @@ @type domain: list, tuple, or instance of C{Domain} """ if variable in self._variables: - raise ValueError, "Tried to insert duplicated variable %s" % \ - repr(variable) + raise ValueError("Tried to insert duplicated variable %s" % \ + repr(variable)) if type(domain) in (list, tuple): domain = Domain(domain) elif isinstance(domain, Domain): domain = copy.copy(domain) else: - raise TypeError, "Domains must be instances of subclasses of "\ - "the Domain class" + raise TypeError("Domains must be instances of subclasses of "\ + "the Domain class") if not domain: - raise ValueError, "Domain is empty" + raise ValueError("Domain is empty") self._variables[variable] = domain def addVariables(self, variables, domain): @@ -184,11 +185,11 @@ @type variables: set or sequence of variables """ if not isinstance(constraint, Constraint): - if callable(constraint): + if isinstance(constraint, collections.Callable): constraint = FunctionConstraint(constraint) else: - raise ValueError, "Constraints must be instances of "\ - "subclasses of the Constraint class" + raise ValueError("Constraints must be instances of "\ + "subclasses of the Constraint class") self._constraints.append((constraint, variables)) def getSolution(self): @@ -259,7 +260,7 @@ def _getArgs(self): domains = self._variables.copy() - allvariables = domains.keys() + allvariables = list(domains.keys()) constraints = [] for constraint, variables in self._constraints: if not variables: @@ -274,7 +275,7 @@ for constraint, variables in constraints[:]: constraint.preProcess(variables, domains, constraints, vconstraints) - for domain in domains.values(): + for domain in list(domains.values()): domain.resetState() if not domain: return None, None, None @@ -368,8 +369,7 @@ constraints affecting the given variables. 
@type vconstraints: dict """ - raise NotImplementedError, \ "%s is an abstract class" % self.__class__.__name__ + raise NotImplementedError("%s is an abstract class" % self.__class__.__name__) def getSolutions(self, domains, constraints, vconstraints): """ @@ -383,8 +383,7 @@ constraints affecting the given variables. @type vconstraints: dict """ - raise NotImplementedError, \ "%s provides only a single solution" % self.__class__.__name__ + raise NotImplementedError("%s provides only a single solution" % self.__class__.__name__) def getSolutionIter(self, domains, constraints, vconstraints): """ @@ -398,8 +397,7 @@ constraints affecting the given variables. @type vconstraints: dict """ - raise NotImplementedError, \ "%s doesn't provide iteration" % self.__class__.__name__ + raise NotImplementedError("%s doesn't provide iteration" % self.__class__.__name__) class BacktrackingSolver(Solver): """ @@ -514,12 +512,12 @@ # Push state before looking for next variable. queue.append((variable, values, pushdomains)) RefactoringTool: Files that were modified: RefactoringTool: ../python3/nltk_contrib/nltk_contrib/refexpr/constraint.py - raise RuntimeError, "Can't happen" + raise RuntimeError("Can't happen") def getSolution(self, domains, constraints, vconstraints): iter = self.getSolutionIter(domains, constraints, vconstraints) try: - return iter.next() + return next(iter) except StopIteration: return None @@ -665,9 +663,9 @@ # Initial assignment for variable in domains: assignments[variable] = random.choice(domains[variable]) - for _ in xrange(self._steps): + for _ in range(self._steps): conflicted = False - lst = domains.keys() + lst = list(domains.keys()) random.shuffle(lst) for variable in lst: # Check if variable is not in conflict @@ -986,7 +984,7 @@ def __call__(self, variables, domains, assignments, forwardcheck=False, _unassigned=Unassigned): singlevalue = _unassigned - for value in assignments.values(): + for value in list(assignments.values()): if singlevalue is _unassigned: singlevalue = value elif value != singlevalue: @@ -1242,7 +1240,7 @@ def __call__(self, variables, domains, assignments, forwardcheck=False): # preProcess() will remove it. - raise RuntimeError, "Can't happen" + raise RuntimeError("Can't happen") def preProcess(self, variables, domains, constraints, vconstraints): set = self._set @@ -1277,7 +1275,7 @@ def __call__(self, variables, domains, assignments, forwardcheck=False): # preProcess() will remove it. - raise RuntimeError, "Can't happen" + raise RuntimeError("Can't happen") def preProcess(self, variables, domains, constraints, vconstraints): set = self._set + for i in '$(find ../python3 -type f -name '\''*.py'\'' |grep -v '\''Tables\.py'\'')' + 2to3 -w -n ../python3/nltk_contrib/nltk_contrib/refexpr/__init__.py RefactoringTool: Skipping optional fixer: buffer RefactoringTool: Skipping optional fixer: idioms RefactoringTool: Skipping optional fixer: set_literal RefactoringTool: Skipping optional fixer: ws_comma RefactoringTool: No files need to be modified.
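One rewrite in the constraint.py diff above ages badly: 2to3 replaces callable(constraint) with isinstance(constraint, collections.Callable). That alias has lived in collections.abc since Python 3.3 and was removed from the collections module in 3.10, while the callable() builtin, dropped in 3.0, was reinstated in 3.2, so on any modern interpreter the original spelling is both simpler and safer. A small sketch of version-safe equivalents follows; the as_constraint helper is hypothetical, and only the check itself mirrors constraint.py:

from collections.abc import Callable  # the ABC's home since Python 3.3

def as_constraint(constraint):
    # Version-safe replacement for 2to3's
    # isinstance(constraint, collections.Callable):
    if callable(constraint):  # builtin reinstated in Python 3.2
        return constraint
    raise ValueError("Constraints must be callable")

assert isinstance(len, Callable)  # ABC form, when an isinstance check is wanted
assert as_constraint(abs) is abs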
+ for i in '$(find ../python3 -type f -name '\''*.py'\'' |grep -v '\''Tables\.py'\'')' + 2to3 -w -n ../python3/nltk_contrib/nltk_contrib/referring.py RefactoringTool: Skipping optional fixer: buffer RefactoringTool: Skipping optional fixer: idioms RefactoringTool: Skipping optional fixer: set_literal RefactoringTool: Skipping optional fixer: ws_comma RefactoringTool: Refactored ../python3/nltk_contrib/nltk_contrib/referring.py RefactoringTool: Files that were modified: RefactoringTool: ../python3/nltk_contrib/nltk_contrib/referring.py --- ../python3/nltk_contrib/nltk_contrib/referring.py (original) +++ ../python3/nltk_contrib/nltk_contrib/referring.py (refactored) @@ -221,23 +221,23 @@ Object2 = {"type":"chihuahua", "size":"large", "colour":"white"} Object3 = {"type":"siamese-cat", "size":"small", "colour":"black"} - print "Given an entity defined as: " + print("Given an entity defined as: ") r = Object1 - print r + print(r) preferred_attrs = ["type", "colour", "size"] - print "In a set defined as: " + print("In a set defined as: ") contrast_set = [Object2, Object3] - print contrast_set + print(contrast_set) RE = IncrementalAlgorithm(KB, r, contrast_set, preferred_attrs).RE - print "The referring expression created to uniquely identify", - print "the referent is: " - print RE + print("The referring expression created to uniquely identify", end=' ') + print("the referent is: ") + print(RE) RE_string = "" for attr, val in RE: RE_string = val + " " + RE_string RE_string = "The " + RE_string - print "This can be surface-realized as:" - print RE_string + print("This can be surface-realized as:") + print(RE_string) if __name__ == "__main__": demo() + for i in '$(find ../python3 -type f -name '\''*.py'\'' |grep -v '\''Tables\.py'\'')' + 2to3 -w -n ../python3/nltk_contrib/nltk_contrib/readability/urlextracter.py RefactoringTool: Skipping optional fixer: buffer RefactoringTool: Skipping optional fixer: idioms RefactoringTool: Skipping optional fixer: set_literal RefactoringTool: Skipping optional fixer: ws_comma RefactoringTool: Refactored ../python3/nltk_contrib/nltk_contrib/readability/urlextracter.py WARNING: couldn't encode ../python3/nltk_contrib/nltk_contrib/readability/urlextracter.py's diff for your terminal RefactoringTool: Files that were modified: RefactoringTool: ../python3/nltk_contrib/nltk_contrib/readability/urlextracter.py --- ../python3/nltk_contrib/nltk_contrib/readability/urlextracter.py (original) +++ ../python3/nltk_contrib/nltk_contrib/readability/urlextracter.py (refactored) @@ -1,7 +1,7 @@ # -*- coding: utf-8 -*- + for i in '$(find ../python3 -type f -name '\''*.py'\'' |grep -v '\''Tables\.py'\'')' + 2to3 -w -n ../python3/nltk_contrib/nltk_contrib/readability/textanalyzer.py RefactoringTool: Skipping optional fixer: buffer RefactoringTool: Skipping optional fixer: idioms RefactoringTool: Skipping optional fixer: set_literal RefactoringTool: Skipping optional fixer: ws_comma RefactoringTool: Refactored ../python3/nltk_contrib/nltk_contrib/readability/textanalyzer.py WARNING: couldn't encode ../python3/nltk_contrib/nltk_contrib/readability/textanalyzer.py's diff for your terminal RefactoringTool: Files that were modified: RefactoringTool: ../python3/nltk_contrib/nltk_contrib/readability/textanalyzer.py --- ../python3/nltk_contrib/nltk_contrib/readability/textanalyzer.py (original) +++ ../python3/nltk_contrib/nltk_contrib/readability/textanalyzer.py (refactored) @@ -3,9 +3,9 @@ import nltk.data from nltk.tokenize import * -import syllables_en -import syllables_no -from 
languageclassifier import * +from . import syllables_en +from . import syllables_no +from .languageclassifier import * import logging class textanalyzer(object): @@ -28,13 +28,13 @@ syllablesCount = self.countSyllables(words) complexwordsCount = self.countComplexWords(text) averageWordsPerSentence = wordCount/sentenceCount - print ' Language: ' + self.lang - print ' Number of characters: ' + str(charCount) - print ' Number of words: ' + str(wordCount) - print ' Number of sentences: ' + str(sentenceCount) - print ' Number of syllables: ' + str(syllablesCount) - print ' Number of complex words: ' + str(complexwordsCount) - print ' Average words per sentence: ' + str(averageWordsPerSentence) + print(' Language: ' + self.lang) + print(' Number of characters: ' + str(charCount)) + print(' Number of words: ' + str(wordCount)) + print(' Number of sentences: ' + str(sentenceCount)) + print(' Number of syllables: ' + str(syllablesCount)) + print(' Number of complex words: ' + str(complexwordsCount)) + print(' Average words per sentence: ' + str(averageWordsPerSentence)) #analyzeText = classmethod(analyzeText) @@ -127,12 +127,12 @@ def _setEncoding(self,text): try: - text = unicode(text, "utf8").encode("utf8") + text = str(text, "utf8").encode("utf8") except UnicodeError: try: - text = unicode(text, "iso8859_1").encode("utf8") + text = str(text, "iso8859_1").encode("utf8") except UnicodeError: - text = unicode(text, "ascii", "replace").encode("utf8") + text = str(text, "ascii", "replace").encode("utf8") return text #_setEncoding = classmethod(_setEncoding) @@ -153,9 +153,9 @@ # \nthe people, for the people, shall not perish from this earth." + for i in '$(find ../python3 -type f -name '\''*.py'\'' |grep -v '\''Tables\.py'\'')' + 2to3 -w -n ../python3/nltk_contrib/nltk_contrib/readability/syllables_no.py RefactoringTool: Skipping optional fixer: buffer RefactoringTool: Skipping optional fixer: idioms RefactoringTool: Skipping optional fixer: set_literal RefactoringTool: Skipping optional fixer: ws_comma RefactoringTool: Refactored ../python3/nltk_contrib/nltk_contrib/readability/syllables_no.py WARNING: couldn't encode ../python3/nltk_contrib/nltk_contrib/readability/syllables_no.py's diff for your terminal RefactoringTool: Files that were modified: RefactoringTool: ../python3/nltk_contrib/nltk_contrib/readability/syllables_no.py --- ../python3/nltk_contrib/nltk_contrib/readability/syllables_no.py (original) +++ ../python3/nltk_contrib/nltk_contrib/readability/syllables_no.py (refactored) @@ -78,10 +78,10 @@ if line: toks = line.split() assert len(toks) == 2 - syllablesInFile[_stripWord(unicode(toks[0],"latin-1").encode("utf-8"))] = int(toks[1]) + syllablesInFile[_stripWord(str(toks[0],"latin-1").encode("utf-8"))] = int(toks[1]) def count(word): - word = unicode(word,"utf-8").encode("utf-8") + word = str(word,"utf-8").encode("utf-8") word = _stripWord(word) if not word: @@ -96,7 +96,7 @@ # Count vowel groups count = 0 prev_was_vowel = 0 + for i in '$(find ../python3 -type f -name '\''*.py'\'' |grep -v '\''Tables\.py'\'')' + 2to3 -w -n ../python3/nltk_contrib/nltk_contrib/readability/syllables_en.py RefactoringTool: Skipping optional fixer: buffer RefactoringTool: Skipping optional fixer: idioms RefactoringTool: Skipping optional fixer: set_literal RefactoringTool: Skipping optional fixer: ws_comma RefactoringTool: No changes to ../python3/nltk_contrib/nltk_contrib/readability/syllables_en.py RefactoringTool: Files that need to be modified: RefactoringTool: 
../python3/nltk_contrib/nltk_contrib/readability/syllables_en.py + for i in '$(find ../python3 -type f -name '\''*.py'\'' |grep -v '\''Tables\.py'\'')' + 2to3 -w -n ../python3/nltk_contrib/nltk_contrib/readability/readabilitytests.py RefactoringTool: Skipping optional fixer: buffer RefactoringTool: Skipping optional fixer: idioms RefactoringTool: Skipping optional fixer: set_literal RefactoringTool: Skipping optional fixer: ws_comma RefactoringTool: Refactored ../python3/nltk_contrib/nltk_contrib/readability/readabilitytests.py RefactoringTool: Files that were modified: RefactoringTool: ../python3/nltk_contrib/nltk_contrib/readability/readabilitytests.py --- ../python3/nltk_contrib/nltk_contrib/readability/readabilitytests.py (original) +++ ../python3/nltk_contrib/nltk_contrib/readability/readabilitytests.py (refactored) @@ -1,4 +1,4 @@ -from textanalyzer import * +from .textanalyzer import * import math class ReadabilityTool: @@ -196,17 +196,17 @@ # print ' RIX : %.1f' % rix # print '*' * 70 - print "=" * 100 - print "Recommended tests for lang: %s" % self.lang - print "=" * 100 - for testname in self.tests_given_lang[self.lang].keys(): - print testname + " : %.2f" % self.tests_given_lang[self.lang][testname](text) - print "=" * 100 - print "Other tests: (Warning! Use with care)" - print "=" * 100 - for testname in self.tests_given_lang["all"].keys(): - if not self.tests_given_lang[self.lang].has_key(testname): - print testname + " : %.2f" % self.tests_given_lang["all"][testname](text) + print("=" * 100) + print("Recommended tests for lang: %s" % self.lang) + print("=" * 100) + for testname in list(self.tests_given_lang[self.lang].keys()): + print(testname + " : %.2f" % self.tests_given_lang[self.lang][testname](text)) + print("=" * 100) + print("Other tests: (Warning! 
Use with care)") + print("=" * 100) + for testname in list(self.tests_given_lang["all"].keys()): + if testname not in self.tests_given_lang[self.lang]: + print(testname + " : %.2f" % self.tests_given_lang["all"][testname](text)) def demo(self): + for i in '$(find ../python3 -type f -name '\''*.py'\'' |grep -v '\''Tables\.py'\'')' + 2to3 -w -n ../python3/nltk_contrib/nltk_contrib/readability/languageclassifier.py RefactoringTool: Skipping optional fixer: buffer RefactoringTool: Skipping optional fixer: idioms RefactoringTool: Skipping optional fixer: set_literal RefactoringTool: Skipping optional fixer: ws_comma RefactoringTool: Refactored ../python3/nltk_contrib/nltk_contrib/readability/languageclassifier.py --- ../python3/nltk_contrib/nltk_contrib/readability/languageclassifier.py (original) +++ ../python3/nltk_contrib/nltk_contrib/readability/languageclassifier.py (refactored) @@ -11,7 +11,7 @@ from nltk.corpus import stopwords -from urlextracter import URLextracter +from .urlextracter import URLextracter from sgmllib import * class NaiveBayes(object): @@ -72,12 +72,12 @@ values = file.split('/') lang = values[-2] - if not self.p_lang.has_key(lang): + if lang not in self.p_lang: self.p_lang[lang] = 0.0 self.p_lang[lang] += 1.0 - if not self.files.has_key(lang): + if lang not in self.files: self.files[lang] = [] f = open(file, 'r') @@ -85,35 +85,35 @@ f.close() # Calculate probabilities - for lang in self.p_lang.keys(): + for lang in list(self.p_lang.keys()): self.p_lang[lang] /= len(self.training_files) self.vocabulary = self.__createVocabulary(self.files) # Calculate P(O | H) p_word_given_lang = self.p_word_given_lang - for lang in self.files.keys(): + for lang in list(self.files.keys()): p_word_given_lang[lang] = {} - for word in self.vocabulary[lang].keys(): + for word in list(self.vocabulary[lang].keys()): p_word_given_lang[lang][word] = 1.0 for word in self.files[lang]: - if self.vocabulary[lang].has_key(word): + if word in self.vocabulary[lang]: p_word_given_lang[lang][word] += 1.0 - for word in self.vocabulary[lang].keys(): + for word in list(self.vocabulary[lang].keys()): p_word_given_lang[lang][word] /= len(self.files[lang]) + len(self.vocabulary[lang]) - print "Training finished...(training-set of size %d)" % len(self.training_files) + print("Training finished...(training-set of size %d)" % len(self.training_files)) self.p_word_given_lang = p_word_given_lang - self.candidate_languages = self.files.keys() + self.candidate_languages = list(self.files.keys()) # Save result as a file output = open(os.path.join("files","lang_data.pickle"),'wb') data = {} data["p_word_given_lang"] = p_word_given_lang - data["canidate_languages"] = self.files.keys() + data["canidate_languages"] = list(self.files.keys()) data["p_lang"] = self.p_lang data["vocabulary"] = self.vocabulary pickler = pickle.dump(data, output, -1) @@ -128,16 +128,16 @@ """ # Count number of occurance of each word word_count = {} - for lang in files.keys(): + for lang in list(files.keys()): for word in files[lang]: - if not word_count.has_key(word): + if word not in word_count: word_count[word] = 0 word_count[word] += 1 vocabulary = {} vocabulary['eng'] = {} vocabulary['no'] = {} - for word in word_count.keys(): + for word in list(word_count.keys()): if word_count[word] > 2: if word != '': if not word in self.nor_stopwords: @@ -155,7 +155,7 @@ """ if test_files == "": - print "No test files given" + print("No test files given") return elif os.path.isdir(str(test_files)): self.test_files = glob.glob(test_files + "/*/*") @@ 
-186,7 +186,7 @@ # Calculates P(O | H) * P(H) for candidate group p = math.log(self.p_lang[candidate_lang]) for word in file_to_be_classified: - if self.vocabulary[candidate_lang].has_key(word): + if word in self.vocabulary[candidate_lang]: p += math.log(self.p_word_given_lang[candidate_lang][word]) if p > max_p or max_p == 1: @@ -196,10 +196,10 @@ total += 1.0 if true_lang != max_lang: errors += 1.0 - print "Classifying finished...(test-set of size %d)" % len(self.test_files) - print "Errors %d" % errors - print "Total %d" % total - print "Accuracy: %.3f" % (1.0 - errors/total) + print("Classifying finished...(test-set of size %d)" % len(self.test_files)) + print("Errors %d" % errors) + print("Total %d" % total) + print("Accuracy: %.3f" % (1.0 - errors/total)) def classifyText(self, text): """ @@ -219,7 +219,7 @@ unknown_words = [] known_words = [] for word in words: - if self.vocabulary[candidate_lang].has_key(word): + if word in self.vocabulary[candidate_lang]: p += math.log(self.p_word_given_lang[candidate_lang][word]) if word not in known_words: known_words.append(word) @@ -241,7 +241,7 @@ def classifyURL(self, url): ue = URLextracter(url) - print 'Classifying %s' % url + print('Classifying %s' % url) content = ue.output() WARNING: couldn't encode ../python3/nltk_contrib/nltk_contrib/readability/languageclassifier.py's diff for your terminal RefactoringTool: Files that were modified: RefactoringTool: ../python3/nltk_contrib/nltk_contrib/readability/languageclassifier.py + for i in '$(find ../python3 -type f -name '\''*.py'\'' |grep -v '\''Tables\.py'\'')' + 2to3 -w -n ../python3/nltk_contrib/nltk_contrib/readability/crawler.py RefactoringTool: Skipping optional fixer: buffer RefactoringTool: Skipping optional fixer: idioms RefactoringTool: Skipping optional fixer: set_literal RefactoringTool: Skipping optional fixer: ws_comma RefactoringTool: Refactored ../python3/nltk_contrib/nltk_contrib/readability/crawler.py RefactoringTool: Files that were modified: RefactoringTool: ../python3/nltk_contrib/nltk_contrib/readability/crawler.py --- ../python3/nltk_contrib/nltk_contrib/readability/crawler.py (original) +++ ../python3/nltk_contrib/nltk_contrib/readability/crawler.py (refactored) @@ -3,7 +3,7 @@ import random import os,re -from urlextracter import * +from .urlextracter import * from sgmllib import * class Crawler: @@ -13,13 +13,13 @@ def crawl(self,url): self.current = url - print "Crawling " + url + print("Crawling " + url) try: ue = URLextracter(url) except SGMLParseError: - print "This URL contains error that can't be handled by this app.\nSorry!" - print "=" * 30 - print "Trying new random URL" + print("This URL contains error that can't be handled by this app.\nSorry!") + print("=" * 30) + print("Trying new random URL") self.crawl(self.urls[random.randint(1,len(self.urls))]) return @@ -30,7 +30,7 @@ filename += part + "." filename += "txt" - print "Stored as: " + filename + print("Stored as: " + filename) urls = "" try: # Set the path of where to store your data @@ -41,7 +41,7 @@ if len(content) > 2: # Minimum 3 words try: - textToWrite = unicode("".join(content)) + textToWrite = str("".join(content)) except UnicodeDecodeError: textToWrite = str("".join(content)) f.write(textToWrite) @@ -50,9 +50,9 @@ # Set this path to same as storage path os.remove("/path/to/saved/data/lang/%s" % filename) urls = ue.linklist - print "" + url + " mined!" + print("" + url + " mined!") except IOError: - print "Mined, but failed to store as file.\nSkipping this, going on to next!"
+ print("Mined, but failed to store as file.\nSkipping this, going on to next!") urls = self.urls ok_urls = [] for i in urls: @@ -68,12 +68,12 @@ if len(ok_urls) < 2: ok_urls = self.crawled unique = True # Fake true - print str(len(ok_urls)) + print(str(len(ok_urls))) else: unique = False next = random.randint(1,len(ok_urls)-1) - print next + print(next) new_url = ok_urls[next] while not unique: next = random.randint(1,len(ok_urls)-1) @@ -86,7 +86,7 @@ new_url = ok_urls[next] unique = True else: - print "Already crawled " + new_url + print("Already crawled " + new_url) ok_urls.remove(new_url) if len(ok_urls) < 2: ok_urls = self.crawled + for i in '$(find ../python3 -type f -name '\''*.py'\'' |grep -v '\''Tables\.py'\'')' + 2to3 -w -n ../python3/nltk_contrib/nltk_contrib/readability/__init__.py RefactoringTool: Skipping optional fixer: buffer RefactoringTool: Skipping optional fixer: idioms RefactoringTool: Skipping optional fixer: set_literal RefactoringTool: Skipping optional fixer: ws_comma RefactoringTool: No changes to ../python3/nltk_contrib/nltk_contrib/readability/__init__.py RefactoringTool: Files that need to be modified: RefactoringTool: ../python3/nltk_contrib/nltk_contrib/readability/__init__.py + for i in '$(find ../python3 -type f -name '\''*.py'\'' |grep -v '\''Tables\.py'\'')' + 2to3 -w -n ../python3/nltk_contrib/nltk_contrib/rdf/rdfvizualize.py RefactoringTool: Skipping optional fixer: buffer RefactoringTool: Skipping optional fixer: idioms RefactoringTool: Skipping optional fixer: set_literal RefactoringTool: Skipping optional fixer: ws_comma RefactoringTool: Refactored ../python3/nltk_contrib/nltk_contrib/rdf/rdfvizualize.py RefactoringTool: Files that were modified: RefactoringTool: ../python3/nltk_contrib/nltk_contrib/rdf/rdfvizualize.py --- ../python3/nltk_contrib/nltk_contrib/rdf/rdfvizualize.py (original) +++ ../python3/nltk_contrib/nltk_contrib/rdf/rdfvizualize.py (refactored) @@ -66,7 +66,7 @@ # add subjects and objects as nodes in the Dot instance for s, o in self.graph.subject_objects(): for uri in s, o: - if uri not in nodes.keys(): + if uri not in list(nodes.keys()): # generate a new node identifier node_id = "n%03d" % count nodes[uri] = node_id @@ -121,9 +121,9 @@ try: store = ConjunctiveGraph() store.parse(FILE, format='xml') - print store.serialize(format='xml') + print(store.serialize(format='xml')) except OSError: - print "Cannot read file '%s'" % FILE + print("Cannot read file '%s'" % FILE) def make_dot_demo(infile): try: @@ -133,13 +133,13 @@ v = Visualizer(store) g = v.graph2dot(filter_edges=True) g.write('%s.dot' % basename) - print "Wrote '%s.dot'" % basename + print("Wrote '%s.dot'" % basename) g.write_png('%s.png' % basename, prog='dot') - print "Wrote '%s.png'" % basename + print("Wrote '%s.png'" % basename) g.write_svg('%s.svg' % basename, prog='dot') - print "Wrote '%s.svg'" % basename + print("Wrote '%s.svg'" % basename) except OSError: - print "Cannot read file '%s'" % FILE + print("Cannot read file '%s'" % FILE) def main(): @@ -169,9 +169,9 @@ #print '*' * 30 #serialize_demo() - print - print "Visualise an rdf graph with Graphviz" - print '*' * 30 + print() + print("Visualise an rdf graph with Graphviz") + print('*' * 30) make_dot_demo(infile) if __name__ == '__main__': + for i in '$(find ../python3 -type f -name '\''*.py'\'' |grep -v '\''Tables\.py'\'')' + 2to3 -w -n ../python3/nltk_contrib/nltk_contrib/rdf/rdfquery.py RefactoringTool: Skipping optional fixer: buffer RefactoringTool: Skipping optional fixer: idioms RefactoringTool: Skipping 
optional fixer: set_literal RefactoringTool: Skipping optional fixer: ws_comma RefactoringTool: Refactored ../python3/nltk_contrib/nltk_contrib/rdf/rdfquery.py RefactoringTool: Files that were modified: RefactoringTool: ../python3/nltk_contrib/nltk_contrib/rdf/rdfquery.py --- ../python3/nltk_contrib/nltk_contrib/rdf/rdfquery.py (original) +++ ../python3/nltk_contrib/nltk_contrib/rdf/rdfquery.py (refactored) @@ -86,7 +86,7 @@ semrep = sem.root_semrep(tree) trans = SPARQLTranslator() trans.translate(semrep) - print trans.query + print(trans.query) if __name__ == '__main__': + for i in '$(find ../python3 -type f -name '\''*.py'\'' |grep -v '\''Tables\.py'\'')' + 2to3 -w -n ../python3/nltk_contrib/nltk_contrib/rdf/rdf.py RefactoringTool: Skipping optional fixer: buffer RefactoringTool: Skipping optional fixer: idioms RefactoringTool: Skipping optional fixer: set_literal RefactoringTool: Skipping optional fixer: ws_comma RefactoringTool: Refactored ../python3/nltk_contrib/nltk_contrib/rdf/rdf.py RefactoringTool: Files that were modified: RefactoringTool: ../python3/nltk_contrib/nltk_contrib/rdf/rdf.py --- ../python3/nltk_contrib/nltk_contrib/rdf/rdf.py (original) +++ ../python3/nltk_contrib/nltk_contrib/rdf/rdf.py (refactored) @@ -27,7 +27,7 @@ object = sym2uri(ns, reldict['objclass'], reldict['objsym']) triple = (subject, predicate, object) if verbose: - print triple + print(triple) return triple def make_rdfs(ns, reldict): @@ -47,7 +47,7 @@ """ Build a URI out of a base, a class term, and a symbol. """ - from urllib import quote + from urllib.parse import quote from rdflib import Namespace rdfclass = class_abbrev(rdfclass) rdfclass = rdfclass.lower() + for i in '$(find ../python3 -type f -name '\''*.py'\'' |grep -v '\''Tables\.py'\'')' + 2to3 -w -n ../python3/nltk_contrib/nltk_contrib/mit/six863/tagging/train.py RefactoringTool: Skipping optional fixer: buffer RefactoringTool: Skipping optional fixer: idioms RefactoringTool: Skipping optional fixer: set_literal RefactoringTool: Skipping optional fixer: ws_comma RefactoringTool: No changes to ../python3/nltk_contrib/nltk_contrib/mit/six863/tagging/train.py RefactoringTool: Files that need to be modified: RefactoringTool: ../python3/nltk_contrib/nltk_contrib/mit/six863/tagging/train.py + for i in '$(find ../python3 -type f -name '\''*.py'\'' |grep -v '\''Tables\.py'\'')' + 2to3 -w -n ../python3/nltk_contrib/nltk_contrib/mit/six863/tagging/tagparse.py RefactoringTool: Skipping optional fixer: buffer RefactoringTool: Skipping optional fixer: idioms RefactoringTool: Skipping optional fixer: set_literal RefactoringTool: Skipping optional fixer: ws_comma RefactoringTool: Refactored ../python3/nltk_contrib/nltk_contrib/mit/six863/tagging/tagparse.py RefactoringTool: Files that were modified: RefactoringTool: ../python3/nltk_contrib/nltk_contrib/mit/six863/tagging/tagparse.py --- ../python3/nltk_contrib/nltk_contrib/mit/six863/tagging/tagparse.py (original) +++ ../python3/nltk_contrib/nltk_contrib/mit/six863/tagging/tagparse.py (refactored) @@ -1,6 +1,6 @@ from nltk.parse import chart from nltk import cfg -from drawchart import ChartDemo +from .drawchart import ChartDemo from nltk.tokenize.regexp import wordpunct #from nltk_contrib.mit.six863.kimmo import * import re, pickle @@ -27,7 +27,7 @@ match = re.match(r"PREFIX\('.*?'\)(.*?)\(.*", feat) if match: pos = match.groups()[0] else: pos = feat.split('(')[0] - print surface, pos + print(surface, pos) leafedge = chart.LeafEdge(word, i) thechart.insert(chart.TreeEdge((i, i+1), cfg.Nonterminal(pos), 
[word], dot=1), [leafedge]) + for i in '$(find ../python3 -type f -name '\''*.py'\'' |grep -v '\''Tables\.py'\'')' + 2to3 -w -n ../python3/nltk_contrib/nltk_contrib/mit/six863/tagging/drawchart.py RefactoringTool: Skipping optional fixer: buffer RefactoringTool: Skipping optional fixer: idioms RefactoringTool: Skipping optional fixer: set_literal RefactoringTool: Skipping optional fixer: ws_comma RefactoringTool: Refactored ../python3/nltk_contrib/nltk_contrib/mit/six863/tagging/drawchart.py --- ../python3/nltk_contrib/nltk_contrib/mit/six863/tagging/drawchart.py (original) +++ ../python3/nltk_contrib/nltk_contrib/mit/six863/tagging/drawchart.py (refactored) @@ -39,8 +39,8 @@ # widget system. import pickle -from tkFileDialog import asksaveasfilename, askopenfilename -import Tkinter, tkFont, tkMessageBox +from tkinter.filedialog import asksaveasfilename, askopenfilename +import tkinter, tkinter.font, tkinter.messagebox import math import os.path @@ -103,12 +103,12 @@ self._selected_cell = None if toplevel: - self._root = Tkinter.Toplevel(parent) + self._root = tkinter.Toplevel(parent) self._root.title(title) self._root.bind('', self.destroy) self._init_quit(self._root) else: - self._root = Tkinter.Frame(parent) + self._root = tkinter.Frame(parent) self._init_matrix(self._root) self._init_list(self._root) @@ -124,18 +124,18 @@ self.draw() def _init_quit(self, root): - quit = Tkinter.Button(root, text='Quit', command=self.destroy) + quit = tkinter.Button(root, text='Quit', command=self.destroy) quit.pack(side='bottom', expand=0, fill='none') def _init_matrix(self, root): - cframe = Tkinter.Frame(root, border=2, relief='sunken') + cframe = tkinter.Frame(root, border=2, relief='sunken') cframe.pack(expand=0, fill='none', padx=1, pady=3, side='top') - self._canvas = Tkinter.Canvas(cframe, width=200, height=200, + self._canvas = tkinter.Canvas(cframe, width=200, height=200, background='white') self._canvas.pack(expand=0, fill='none') def _init_numedges(self, root): - self._numedges_label = Tkinter.Label(root, text='0 edges') + self._numedges_label = tkinter.Label(root, text='0 edges') self._numedges_label.pack(expand=0, fill='none', side='top') def _init_list(self, root): @@ -212,8 +212,8 @@ except: pass def _fire_callbacks(self, event, *args): - if not self._callbacks.has_key(event): return - for cb_func in self._callbacks[event].keys(): cb_func(*args) + if event not in self._callbacks: return + for cb_func in list(self._callbacks[event].keys()): cb_func(*args) def select_cell(self, i, j): if self._root is None: return @@ -274,9 +274,9 @@ # Labels and dotted lines for i in range(N): c.create_text(LEFT_MARGIN-2, i*dy+dy/2+TOP_MARGIN, - text=`i`, anchor='e') + text=repr(i), anchor='e') c.create_text(i*dx+dx/2+LEFT_MARGIN, N*dy+TOP_MARGIN+1, - text=`i`, anchor='n') + text=repr(i), anchor='n') c.create_line(LEFT_MARGIN, dy*(i+1)+TOP_MARGIN, dx*N+LEFT_MARGIN, dy*(i+1)+TOP_MARGIN, dash='.') c.create_line(dx*i+LEFT_MARGIN, TOP_MARGIN, @@ -327,21 +327,21 @@ self._selectbox = None if toplevel: - self._root = Tkinter.Toplevel(parent) + self._root = tkinter.Toplevel(parent) self._root.title('Chart Parsing Demo: Results') self._root.bind('', self.destroy) else: - self._root = Tkinter.Frame(parent) + self._root = tkinter.Frame(parent) # Buttons if toplevel: - buttons = Tkinter.Frame(self._root) + buttons = tkinter.Frame(self._root) buttons.pack(side='bottom', expand=0, fill='x') - Tkinter.Button(buttons, text='Quit', + tkinter.Button(buttons, text='Quit', command=self.destroy).pack(side='right') - 
Tkinter.Button(buttons, text='Print All', + tkinter.Button(buttons, text='Print All', command=self.print_all).pack(side='left') - Tkinter.Button(buttons, text='Print Selection', + tkinter.Button(buttons, text='Print Selection', command=self.print_selection).pack(side='left') # Canvas frame. @@ -404,7 +404,7 @@ def print_selection(self, *e): if self._root is None: return if self._selection is None: - tkMessageBox.showerror('Print Error', 'No tree selected') + tkinter.messagebox.showerror('Print Error', 'No tree selected') else: c = self._cframe.canvas() for widget in self._treewidgets: @@ -509,7 +509,7 @@ self._operator = None # Set up the root window. - self._root = Tkinter.Tk() + self._root = tkinter.Tk() self._root.title('Chart Comparison') self._root.bind('', self.destroy) self._root.bind('', self.destroy) @@ -540,10 +540,10 @@ #//////////////////////////////////////////////////////////// def _init_menubar(self, root): - menubar = Tkinter.Menu(root) + menubar = tkinter.Menu(root) # File menu - filemenu = Tkinter.Menu(menubar, tearoff=0) + filemenu = tkinter.Menu(menubar, tearoff=0) filemenu.add_command(label='Load Chart', accelerator='Ctrl-o', underline=0, command=self.load_chart_dialog) filemenu.add_command(label='Save Output', accelerator='Ctrl-s', @@ -554,7 +554,7 @@ menubar.add_cascade(label='File', underline=0, menu=filemenu) # Compare menu - opmenu = Tkinter.Menu(menubar, tearoff=0) + opmenu = tkinter.Menu(menubar, tearoff=0) opmenu.add_command(label='Intersection', command=self._intersection, accelerator='+') @@ -573,21 +573,21 @@ self._root.config(menu=menubar) def _init_divider(self, root): - divider = Tkinter.Frame(root, border=2, relief='sunken') + divider = tkinter.Frame(root, border=2, relief='sunken') divider.pack(side='top', fill='x', ipady=2) def _init_chartviews(self, root): opfont=('symbol', -36) # Font for operator. eqfont=('helvetica', -36) # Font for equals sign. - frame = Tkinter.Frame(root, background='#c0c0c0') + frame = tkinter.Frame(root, background='#c0c0c0') frame.pack(side='top', expand=1, fill='both') # The left matrix. - cv1_frame = Tkinter.Frame(frame, border=3, relief='groove') + cv1_frame = tkinter.Frame(frame, border=3, relief='groove') cv1_frame.pack(side='left', padx=8, pady=7, expand=1, fill='both') self._left_selector = MutableOptionMenu( - cv1_frame, self._charts.keys(), command=self._select_left) + cv1_frame, list(self._charts.keys()), command=self._select_left) self._left_selector.pack(side='top', pady=5, fill='x') self._left_matrix = ChartMatrixView(cv1_frame, self._emptychart, toplevel=False, @@ -599,15 +599,15 @@ self._left_matrix.inactivate() # The operator. - self._op_label = Tkinter.Label(frame, text=' ', width=3, + self._op_label = tkinter.Label(frame, text=' ', width=3, background='#c0c0c0', font=opfont) self._op_label.pack(side='left', padx=5, pady=5) # The right matrix. 
- cv2_frame = Tkinter.Frame(frame, border=3, relief='groove') + cv2_frame = tkinter.Frame(frame, border=3, relief='groove') cv2_frame.pack(side='left', padx=8, pady=7, expand=1, fill='both') self._right_selector = MutableOptionMenu( - cv2_frame, self._charts.keys(), command=self._select_right) + cv2_frame, list(self._charts.keys()), command=self._select_right) self._right_selector.pack(side='top', pady=5, fill='x') self._right_matrix = ChartMatrixView(cv2_frame, self._emptychart, toplevel=False, @@ -619,13 +619,13 @@ self._right_matrix.inactivate() # The equals sign - Tkinter.Label(frame, text='=', width=3, background='#c0c0c0', + tkinter.Label(frame, text='=', width=3, background='#c0c0c0', font=eqfont).pack(side='left', padx=5, pady=5) # The output matrix. - out_frame = Tkinter.Frame(frame, border=3, relief='groove') + out_frame = tkinter.Frame(frame, border=3, relief='groove') out_frame.pack(side='left', padx=8, pady=7, expand=1, fill='both') - self._out_label = Tkinter.Label(out_frame, text='Output') + self._out_label = tkinter.Label(out_frame, text='Output') self._out_label.pack(side='top', pady=9) self._out_matrix = ChartMatrixView(out_frame, self._emptychart, toplevel=False, @@ -637,19 +637,19 @@ self._out_matrix.inactivate() def _init_buttons(self, root): - buttons = Tkinter.Frame(root) + buttons = tkinter.Frame(root) buttons.pack(side='bottom', pady=5, fill='x', expand=0) - Tkinter.Button(buttons, text='Intersection', + tkinter.Button(buttons, text='Intersection', command=self._intersection).pack(side='left') - Tkinter.Button(buttons, text='Union', + tkinter.Button(buttons, text='Union', command=self._union).pack(side='left') - Tkinter.Button(buttons, text='Difference', + tkinter.Button(buttons, text='Difference', command=self._difference).pack(side='left') - Tkinter.Frame(buttons, width=20).pack(side='left') - Tkinter.Button(buttons, text='Swap Charts', + tkinter.Frame(buttons, width=20).pack(side='left') + tkinter.Button(buttons, text='Swap Charts', command=self._swapcharts).pack(side='left') - Tkinter.Button(buttons, text='Detatch Output', + tkinter.Button(buttons, text='Detatch Output', command=self._detatch_out).pack(side='right') def _init_bindings(self, root): @@ -692,8 +692,8 @@ defaultextension='.pickle') if not filename: return try: pickle.dump((self._out_chart), open(filename, 'w')) - except Exception, e: - tkMessageBox.showerror('Error Saving Chart', + except Exception as e: + tkinter.messagebox.showerror('Error Saving Chart', 'Unable to open file: %r\n%s' % (filename, e)) @@ -702,8 +702,8 @@ defaultextension='.pickle') if not filename: return try: self.load_chart(filename) - except Exception, e: - tkMessageBox.showerror('Error Loading Chart', + except Exception as e: + tkinter.messagebox.showerror('Error Loading Chart', 'Unable to open file: %r\n%s' % (filename, e)) @@ -925,12 +925,12 @@ # If they didn't provide a main window, then set one up. if root is None: - top = Tkinter.Tk() + top = tkinter.Tk() top.title('Chart View') def destroy1(e, top=top): top.destroy() def destroy2(top=top): top.destroy() top.bind('q', destroy1) - b = Tkinter.Button(top, text='Done', command=destroy2) + b = tkinter.Button(top, text='Done', command=destroy2) b.pack(side='bottom') self._root = top else: @@ -947,9 +947,9 @@ # Create the sentence canvas. 
if draw_sentence: - cframe = Tkinter.Frame(self._root, relief='sunk', border=2) + cframe = tkinter.Frame(self._root, relief='sunk', border=2) cframe.pack(fill='both', side='bottom') - self._sentence_canvas = Tkinter.Canvas(cframe, height=50) + self._sentence_canvas = tkinter.Canvas(cframe, height=50) self._sentence_canvas['background'] = '#e0e0e0' self._sentence_canvas.pack(fill='both') #self._sentence_canvas['height'] = self._sentence_height @@ -976,12 +976,12 @@ def _init_fonts(self, root): - self._boldfont = tkFont.Font(family='helvetica', weight='bold', + self._boldfont = tkinter.font.Font(family='helvetica', weight='bold', size=self._fontsize) - self._font = tkFont.Font(family='helvetica', + self._font = tkinter.font.Font(family='helvetica', size=self._fontsize) # See: - self._sysfont = tkFont.Font(font=Tkinter.Button()["font"]) + self._sysfont = tkinter.font.Font(font=tkinter.Button()["font"]) root.option_add("*Font", self._sysfont) def _sb_canvas(self, root, expand='y', @@ -989,12 +989,12 @@ """ Helper for __init__: construct a canvas with a scrollbar. """ - cframe =Tkinter.Frame(root, relief='sunk', border=2) + cframe =tkinter.Frame(root, relief='sunk', border=2) cframe.pack(fill=fill, expand=expand, side=side) - canvas = Tkinter.Canvas(cframe, background='#e0e0e0') + canvas = tkinter.Canvas(cframe, background='#e0e0e0') # Give the canvas a scrollbar. - sb = Tkinter.Scrollbar(cframe, orient='vertical') + sb = tkinter.Scrollbar(cframe, orient='vertical') sb.pack(side='right', fill='y') canvas.pack(side='left', fill=fill, expand='yes') @@ -1079,7 +1079,7 @@ self._resize() else: for edge in self._chart: - if not self._edgetags.has_key(edge): + if edge not in self._edgetags: self._add_edge(edge) self._resize() @@ -1139,7 +1139,7 @@ - Find an available level - Call _draw_edge """ - if self._edgetags.has_key(edge): return + if edge in self._edgetags: return self._analyze_edge(edge) self._grow() @@ -1246,11 +1246,11 @@ If no colors are specified, use intelligent defaults (dependant on selection, etc.) """ - if not self._edgetags.has_key(edge): return + if edge not in self._edgetags: return c = self._chart_canvas if linecolor is not None and textcolor is not None: - if self._marks.has_key(edge): + if edge in self._marks: linecolor = self._marks[edge] tags = self._edgetags[edge] c.itemconfig(tags[0], fill=linecolor) @@ -1262,7 +1262,7 @@ return else: N = self._chart.num_leaves() - if self._marks.has_key(edge): + if edge in self._marks: self._color_edge(self._marks[edge]) if (edge.is_complete() and edge.span() == (0, N)): self._color_edge(edge, '#084', '#042') @@ -1283,7 +1283,7 @@ Unmark an edge (or all edges) """ if edge == None: - old_marked_edges = self._marks.keys() + old_marked_edges = list(self._marks.keys()) self._marks = {} for edge in old_marked_edges: self._color_edge(edge) @@ -1379,7 +1379,7 @@ c2.tag_lower(t2) t3=c3.create_line(x, 0, x, BOTTOM) c3.tag_lower(t3) - t4=c3.create_text(x+2, 0, text=`i`, anchor='nw', + t4=c3.create_text(x+2, 0, text=repr(i), anchor='nw', font=self._font) c3.tag_lower(t4) #if i % 4 == 0: @@ -1574,8 +1574,8 @@ except: pass def _fire_callbacks(self, event, *args): - if not self._callbacks.has_key(event): return - for cb_func in self._callbacks[event].keys(): cb_func(*args) + if event not in self._callbacks: return + for cb_func in list(self._callbacks[event].keys()): cb_func(*args) ####################################################################### # Pseudo Earley Rule @@ -1659,14 +1659,14 @@ self._root = None try: # Create the root window. 
- self._root = Tkinter.Tk() + self._root = tkinter.Tk() self._root.title(title) self._root.bind('', self.destroy) # Set up some frames. - frame3 = Tkinter.Frame(self._root) - frame2 = Tkinter.Frame(self._root) - frame1 = Tkinter.Frame(self._root) + frame3 = tkinter.Frame(self._root) + frame2 = tkinter.Frame(self._root) + frame1 = tkinter.Frame(self._root) frame3.pack(side='bottom', fill='none') frame2.pack(side='bottom', fill='x') frame1.pack(side='bottom', fill='both', expand=1) @@ -1687,7 +1687,7 @@ self.reset() except: - print 'Error creating Tree View' + print('Error creating Tree View') self.destroy() raise @@ -1725,25 +1725,25 @@ def _init_fonts(self, root): # See: - self._sysfont = tkFont.Font(font=Tkinter.Button()["font"]) + self._sysfont = tkinter.font.Font(font=tkinter.Button()["font"]) root.option_add("*Font", self._sysfont) # TWhat's our font size (default=same as sysfont) - self._size = Tkinter.IntVar(root) + self._size = tkinter.IntVar(root) self._size.set(self._sysfont.cget('size')) - self._boldfont = tkFont.Font(family='helvetica', weight='bold', + self._boldfont = tkinter.font.Font(family='helvetica', weight='bold', size=self._size.get()) - self._font = tkFont.Font(family='helvetica', + self._font = tkinter.font.Font(family='helvetica', size=self._size.get()) def _init_animation(self): # Are we stepping? (default=yes) - self._step = Tkinter.IntVar(self._root) + self._step = tkinter.IntVar(self._root) self._step.set(1) # What's our animation speed (default=fast) - self._animate = Tkinter.IntVar(self._root) + self._animate = tkinter.IntVar(self._root) self._animate.set(3) # Default speed = fast # Are we currently animating? @@ -1757,60 +1757,60 @@ def _init_rulelabel(self, parent): ruletxt = 'Last edge generated by:' - self._rulelabel1 = Tkinter.Label(parent,text=ruletxt, + self._rulelabel1 = tkinter.Label(parent,text=ruletxt, font=self._boldfont) - self._rulelabel2 = Tkinter.Label(parent, width=40, + self._rulelabel2 = tkinter.Label(parent, width=40, relief='groove', anchor='w', font=self._boldfont) self._rulelabel1.pack(side='left') self._rulelabel2.pack(side='left') - step = Tkinter.Checkbutton(parent, variable=self._step, + step = tkinter.Checkbutton(parent, variable=self._step, text='Step') step.pack(side='right') def _init_buttons(self, parent): - frame1 = Tkinter.Frame(parent) - frame2 = Tkinter.Frame(parent) + frame1 = tkinter.Frame(parent) + frame2 = tkinter.Frame(parent) frame1.pack(side='bottom', fill='x') frame2.pack(side='top', fill='none') - Tkinter.Button(frame1, text='Reset\nParser', + tkinter.Button(frame1, text='Reset\nParser', background='#90c0d0', foreground='black', command=self.reset).pack(side='right') #Tkinter.Button(frame1, text='Pause', # background='#90c0d0', foreground='black', # command=self.pause).pack(side='left') - Tkinter.Button(frame1, text='Top Down\nStrategy', + tkinter.Button(frame1, text='Top Down\nStrategy', background='#90c0d0', foreground='black', command=self.top_down_strategy).pack(side='left') - Tkinter.Button(frame1, text='Bottom Up\nStrategy', + tkinter.Button(frame1, text='Bottom Up\nStrategy', background='#90c0d0', foreground='black', command=self.bottom_up_strategy).pack(side='left') - Tkinter.Button(frame1, text='Earley\nAlgorithm', + tkinter.Button(frame1, text='Earley\nAlgorithm', background='#90c0d0', foreground='black', command=self.earley_algorithm).pack(side='left') - Tkinter.Button(frame2, text='Top Down Init\nRule', + tkinter.Button(frame2, text='Top Down Init\nRule', background='#90f090', foreground='black', 
command=self.top_down_init).pack(side='left') - Tkinter.Button(frame2, text='Top Down Expand\nRule', + tkinter.Button(frame2, text='Top Down Expand\nRule', background='#90f090', foreground='black', command=self.top_down_expand).pack(side='left') - Tkinter.Button(frame2, text='Top Down Match\nRule', + tkinter.Button(frame2, text='Top Down Match\nRule', background='#90f090', foreground='black', command=self.top_down_match).pack(side='left') - Tkinter.Frame(frame2, width=20).pack(side='left') - - Tkinter.Button(frame2, text='Bottom Up Init\nRule', + tkinter.Frame(frame2, width=20).pack(side='left') + + tkinter.Button(frame2, text='Bottom Up Init\nRule', background='#90f090', foreground='black', command=self.bottom_up_init).pack(side='left') - Tkinter.Button(frame2, text='Bottom Up Predict\nRule', + tkinter.Button(frame2, text='Bottom Up Predict\nRule', background='#90f090', foreground='black', command=self.bottom_up).pack(side='left') - Tkinter.Frame(frame2, width=20).pack(side='left') - - Tkinter.Button(frame2, text='Fundamental\nRule', + tkinter.Frame(frame2, width=20).pack(side='left') + + tkinter.Button(frame2, text='Fundamental\nRule', background='#90f090', foreground='black', command=self.fundamental).pack(side='left') @@ -1844,9 +1844,9 @@ self._root.bind('s', lambda e,s=self._step:s.set(not s.get())) def _init_menubar(self): - menubar = Tkinter.Menu(self._root) - - filemenu = Tkinter.Menu(menubar, tearoff=0) + menubar = tkinter.Menu(self._root) + + filemenu = tkinter.Menu(menubar, tearoff=0) filemenu.add_command(label='Save Chart', underline=0, command=self.save_chart, accelerator='Ctrl-s') filemenu.add_command(label='Load Chart', underline=0, @@ -1863,7 +1863,7 @@ command=self.destroy, accelerator='Ctrl-x') menubar.add_cascade(label='File', underline=0, menu=filemenu) - editmenu = Tkinter.Menu(menubar, tearoff=0) + editmenu = tkinter.Menu(menubar, tearoff=0) editmenu.add_command(label='Edit Grammar', underline=5, command=self.edit_grammar, accelerator='Ctrl-g') @@ -1872,14 +1872,14 @@ accelerator='Ctrl-t') menubar.add_cascade(label='Edit', underline=0, menu=editmenu) - viewmenu = Tkinter.Menu(menubar, tearoff=0) + viewmenu = tkinter.Menu(menubar, tearoff=0) viewmenu.add_command(label='Chart Matrix', underline=6, command=self.view_matrix) viewmenu.add_command(label='Results', underline=0, command=self.view_results) menubar.add_cascade(label='View', underline=0, menu=viewmenu) - rulemenu = Tkinter.Menu(menubar, tearoff=0) + rulemenu = tkinter.Menu(menubar, tearoff=0) rulemenu.add_command(label='Top Down Strategy', underline=0, command=self.top_down_strategy, accelerator='t') @@ -1904,7 +1904,7 @@ command=self.fundamental) menubar.add_cascade(label='Apply', underline=0, menu=rulemenu) - animatemenu = Tkinter.Menu(menubar, tearoff=0) + animatemenu = tkinter.Menu(menubar, tearoff=0) animatemenu.add_checkbutton(label="Step", underline=0, variable=self._step, accelerator='s') @@ -1922,7 +1922,7 @@ accelerator='+') menubar.add_cascade(label="Animate", underline=1, menu=animatemenu) - zoommenu = Tkinter.Menu(menubar, tearoff=0) + zoommenu = tkinter.Menu(menubar, tearoff=0) zoommenu.add_radiobutton(label='Tiny', variable=self._size, underline=0, value=10, command=self.resize) zoommenu.add_radiobutton(label='Small', variable=self._size, @@ -1935,7 +1935,7 @@ underline=0, value=24, command=self.resize) menubar.add_cascade(label='Zoom', underline=0, menu=zoommenu) - helpmenu = Tkinter.Menu(menubar, tearoff=0) + helpmenu = tkinter.Menu(menubar, tearoff=0) helpmenu.add_command(label='About', 
underline=0, command=self.about) helpmenu.add_command(label='Instructions', underline=0, @@ -2010,7 +2010,7 @@ def about(self, *e): ABOUT = ("NLTK Chart Parser Demo\n"+ "Written by Edward Loper") - tkMessageBox.showinfo('About: Chart Parser Demo', ABOUT) + tkinter.messagebox.showinfo('About: Chart Parser Demo', ABOUT) #//////////////////////////////////////////////////////////// # File Menu @@ -2035,9 +2035,9 @@ if self._matrix: self._matrix.deselect_cell() if self._results: self._results.set_chart(chart) self._cp.set_chart(chart) - except Exception, e: + except Exception as e: raise - tkMessageBox.showerror('Error Loading Chart', + tkinter.messagebox.showerror('Error Loading Chart', 'Unable to open file: %r' % filename) def save_chart(self, *args): @@ -2047,9 +2047,9 @@ if not filename: return try: pickle.dump(self._chart, open(filename, 'w')) - except Exception, e: + except Exception as e: raise - tkMessageBox.showerror('Error Saving Chart', + tkinter.messagebox.showerror('Error Saving Chart', 'Unable to open file: %r' % filename) def load_grammar(self, *args): @@ -2063,8 +2063,8 @@ else: grammar = cfg.parse_grammar(open(filename, 'r').read()) self.set_grammar(grammar) - except Exception, e: - tkMessageBox.showerror('Error Loading Grammar', + except Exception as e: + tkinter.messagebox.showerror('Error Loading Grammar', 'Unable to open file: %r' % filename) def save_grammar(self, *args): @@ -2082,8 +2082,8 @@ for prod in start: file.write('%s\n' % prod) for prod in rest: file.write('%s\n' % prod) file.close() - except Exception, e: - tkMessageBox.showerror('Error Saving Grammar', + except Exception as e: + tkinter.messagebox.showerror('Error Saving Grammar', 'Unable to open file: %r' % filename) def reset(self, *args): @@ -2209,7 +2209,7 @@ self._root.after(20, self._animate_strategy) def _apply_strategy(self): - new_edge = self._cpstep.next() + new_edge = next(self._cpstep) if new_edge is not None: self._show_new_edge(new_edge) @@ -2281,12 +2281,12 @@ sent = 'John ate the cake on the table' tokens = list(tokenize.whitespace(sent)) - print 'grammar= (' + print('grammar= (') for rule in grammar.productions(): - print ' ', repr(rule)+',' - print ')' - print 'tokens = %r' % tokens - print 'Calling "ChartDemo(grammar, tokens)"...' + print(' ', repr(rule)+',') + print(')') + print('tokens = %r' % tokens) + print('Calling "ChartDemo(grammar, tokens)"...') ChartDemo(grammar, tokens).mainloop() if __name__ == '__main__': RefactoringTool: Files that were modified: RefactoringTool: ../python3/nltk_contrib/nltk_contrib/mit/six863/tagging/drawchart.py + for i in '$(find ../python3 -type f -name '\''*.py'\'' |grep -v '\''Tables\.py'\'')' + 2to3 -w -n ../python3/nltk_contrib/nltk_contrib/mit/six863/tagging/__init__.py RefactoringTool: Skipping optional fixer: buffer RefactoringTool: Skipping optional fixer: idioms RefactoringTool: Skipping optional fixer: set_literal RefactoringTool: Skipping optional fixer: ws_comma RefactoringTool: No files need to be modified.
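The drawchart.py diff above is mostly fix_imports module renames (Tkinter -> tkinter, tkFont -> tkinter.font, tkMessageBox -> tkinter.messagebox, tkFileDialog -> tkinter.filedialog) plus four syntax changes: has_key() -> in, backtick repr -> repr(), 'except Exception, e' -> 'except Exception as e', and iterator .next() -> the next() builtin. A condensed sketch of the same changes (a toy snippet, not drawchart.py itself; importing tkinter needs no display, so this runs headless):

import tkinter                                  # Python 2: import Tkinter
import tkinter.font                             # Python 2: import tkFont
import tkinter.messagebox                       # Python 2: import tkMessageBox
from tkinter.filedialog import askopenfilename  # Python 2: from tkFileDialog import askopenfilename

def first_edge(edges):
    it = iter(edges)
    try:
        return next(it)        # Python 2: it.next()
    except StopIteration:
        return None

marks = {}
if 'edge' not in marks:        # Python 2: if not marks.has_key('edge'):
    marks['edge'] = repr(0)    # Python 2 backtick repr: `0`
try:
    raise OSError("demo")
except Exception as e:         # Python 2: except Exception, e  -- a syntax error in Python 3
    print(first_edge([marks]), 'caught: %r' % e)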
+ for i in '$(find ../python3 -type f -name '\''*.py'\'' |grep -v '\''Tables\.py'\'')' + 2to3 -w -n ../python3/nltk_contrib/nltk_contrib/mit/six863/semantics/treeview.py RefactoringTool: Skipping optional fixer: buffer RefactoringTool: Skipping optional fixer: idioms RefactoringTool: Skipping optional fixer: set_literal RefactoringTool: Skipping optional fixer: ws_comma RefactoringTool: Refactored ../python3/nltk_contrib/nltk_contrib/mit/six863/semantics/treeview.py RefactoringTool: Files that were modified: RefactoringTool: ../python3/nltk_contrib/nltk_contrib/mit/six863/semantics/treeview.py --- ../python3/nltk_contrib/nltk_contrib/mit/six863/semantics/treeview.py (original) +++ ../python3/nltk_contrib/nltk_contrib/mit/six863/semantics/treeview.py (refactored) @@ -1,4 +1,4 @@ -import Tkinter +import tkinter from nltk.draw import TreeWidget from nltk.draw import CanvasFrame @@ -7,32 +7,32 @@ class TreeView: def __init__(self, trees, root=None): if len(trees) == 0: - print "No trees to display." + print("No trees to display.") return newroot = False if root is None: - root = Tkinter.Tk() + root = tkinter.Tk() window = root newroot = True else: - window = Tkinter.Toplevel(root) + window = tkinter.Toplevel(root) window.title("Parse Tree") window.geometry("600x400") self.cf = CanvasFrame(window) self.cf.pack(side='top', expand=1, fill='both') - buttons = Tkinter.Frame(window) + buttons = tkinter.Frame(window) buttons.pack(side='bottom', fill='x') - self.spin = Tkinter.Spinbox(buttons, from_=1, to=len(trees), + self.spin = tkinter.Spinbox(buttons, from_=1, to=len(trees), command=self.showtree, width=3) if len(trees) > 1: self.spin.pack(side='left') - self.label = Tkinter.Label(buttons, text="of %d" % len(trees)) + self.label = tkinter.Label(buttons, text="of %d" % len(trees)) if len(trees) > 1: self.label.pack(side='left') - self.done = Tkinter.Button(buttons, text="Done", command=window.destroy) + self.done = tkinter.Button(buttons, text="Done", command=window.destroy) self.done.pack(side='right') - self.printps = Tkinter.Button(buttons, text="Print to Postscript", command=self.cf.print_to_file) + self.printps = tkinter.Button(buttons, text="Print to Postscript", command=self.cf.print_to_file) self.printps.pack(side='right') self.trees = trees + for i in '$(find ../python3 -type f -name '\''*.py'\'' |grep -v '\''Tables\.py'\'')' + 2to3 -w -n ../python3/nltk_contrib/nltk_contrib/mit/six863/semantics/tree.py RefactoringTool: Skipping optional fixer: buffer RefactoringTool: Skipping optional fixer: idioms RefactoringTool: Skipping optional fixer: set_literal RefactoringTool: Skipping optional fixer: ws_comma RefactoringTool: No files need to be modified. 
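The semantics modules below lean on two more fixers. fix_import makes intra-package imports explicit ('from featurechart import *' becomes 'from .featurechart import *'), since Python 3 dropped implicit relative imports; and fix_next, visible in the logic.py Parser diff that follows, renames an iterator's next() method to __next__ and routes call sites through the next() builtin. A minimal sketch of that protocol change (a toy token stream, not the nltk_contrib Parser):

class TokenStream:
    """Iterate over a fixed token list, Python 3 style."""
    def __init__(self, tokens):
        self._tokens = list(tokens)
    def __iter__(self):
        return self
    def __next__(self):             # Python 2 spelled this: def next(self)
        if not self._tokens:
            raise StopIteration
        return self._tokens.pop(0)

stream = TokenStream(['\\x', 'y', '.'])
print(next(stream))                 # Python 2 call sites read: stream.next()
print(list(stream))                 # iteration drains the rest: ['y', '.']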
+ for i in '$(find ../python3 -type f -name '\''*.py'\'' |grep -v '\''Tables\.py'\'')' + 2to3 -w -n ../python3/nltk_contrib/nltk_contrib/mit/six863/semantics/testw.py RefactoringTool: Skipping optional fixer: buffer RefactoringTool: Skipping optional fixer: idioms RefactoringTool: Skipping optional fixer: set_literal RefactoringTool: Skipping optional fixer: ws_comma RefactoringTool: Refactored ../python3/nltk_contrib/nltk_contrib/mit/six863/semantics/testw.py RefactoringTool: Files that were modified: RefactoringTool: ../python3/nltk_contrib/nltk_contrib/mit/six863/semantics/testw.py --- ../python3/nltk_contrib/nltk_contrib/mit/six863/semantics/testw.py (original) +++ ../python3/nltk_contrib/nltk_contrib/mit/six863/semantics/testw.py (refactored) @@ -1,14 +1,14 @@ -from featurechart import * -from treeview import * +from .featurechart import * +from .treeview import * def demo(): cp = load_earley('lab3-slash.cfg', trace=1) trees = cp.parse('Mary walks') for tree in trees: - print tree + print(tree) sem = tree[0].node['sem'] - print sem - print sem.skolemise().clauses() + print(sem) + print(sem.skolemise().clauses()) return sem.skolemise().clauses() #run_profile() + for i in '$(find ../python3 -type f -name '\''*.py'\'' |grep -v '\''Tables\.py'\'')' + 2to3 -w -n ../python3/nltk_contrib/nltk_contrib/mit/six863/semantics/test.py RefactoringTool: Skipping optional fixer: buffer RefactoringTool: Skipping optional fixer: idioms RefactoringTool: Skipping optional fixer: set_literal RefactoringTool: Skipping optional fixer: ws_comma RefactoringTool: Refactored ../python3/nltk_contrib/nltk_contrib/mit/six863/semantics/test.py RefactoringTool: Files that were modified: RefactoringTool: ../python3/nltk_contrib/nltk_contrib/mit/six863/semantics/test.py --- ../python3/nltk_contrib/nltk_contrib/mit/six863/semantics/test.py (original) +++ ../python3/nltk_contrib/nltk_contrib/mit/six863/semantics/test.py (refactored) @@ -1,14 +1,14 @@ -from featurechart import * -from treeview import * +from .featurechart import * +from .treeview import * def demo(): cp = load_earley('lab3-slash.cfg', trace=0) trees = cp.parse('Mary sees a dog in Noosa') for tree in trees: - print tree + print(tree) sem = tree[0].node['sem'] - print sem - print sem.skolemise().clauses() + print(sem) + print(sem.skolemise().clauses()) return sem.skolemise().clauses() #run_profile() + for i in '$(find ../python3 -type f -name '\''*.py'\'' |grep -v '\''Tables\.py'\'')' + 2to3 -w -n ../python3/nltk_contrib/nltk_contrib/mit/six863/semantics/logic.py RefactoringTool: Skipping optional fixer: buffer RefactoringTool: Skipping optional fixer: idioms RefactoringTool: Skipping optional fixer: set_literal RefactoringTool: Skipping optional fixer: ws_comma RefactoringTool: Refactored ../python3/nltk_contrib/nltk_contrib/mit/six863/semantics/logic.py --- ../python3/nltk_contrib/nltk_contrib/mit/six863/semantics/logic.py (original) +++ ../python3/nltk_contrib/nltk_contrib/mit/six863/semantics/logic.py (refactored) @@ -1,7 +1,7 @@ # Natural Language Toolkit: Logic from nltk.utilities import Counter -from featurelite import SubstituteBindingsMixin, FeatureI -from featurelite import Variable as FeatureVariable +from .featurelite import SubstituteBindingsMixin, FeatureI +from .featurelite import Variable as FeatureVariable _counter = Counter() def unique_variable(counter=None): @@ -137,7 +137,7 @@ raise NotImplementedError def __hash__(self): - raise NotImplementedError, self.__class__ + raise NotImplementedError(self.__class__) def normalize(self): if 
hasattr(self, '_normalized'): return self._normalized @@ -612,7 +612,7 @@ @returns: a parsed Expression """ self.feed(data) - result = self.next() + result = next(self) return result def process(self): @@ -629,7 +629,7 @@ whether the token will be removed from the buffer; setting it to 0 gives lookahead capability.""" if self.buffer == '': - raise Error, "end of stream" + raise Error("end of stream") tok = None buffer = self.buffer while not tok: @@ -654,7 +654,7 @@ TOKENS.extend(Parser.BOOL) return token not in TOKENS - def next(self): + def __next__(self): """Parse the next complete expression from the stream and return it.""" tok = self.token() @@ -678,8 +678,8 @@ tok = self.token() if tok != Parser.DOT: - raise Error, "parse error, unexpected token: %s" % tok - term = self.next() + raise Error("parse error, unexpected token: %s" % tok) + term = next(self) accum = factory(Variable(vars.pop()), term) while vars: accum = factory(Variable(vars.pop()), accum) @@ -687,12 +687,12 @@ elif tok == Parser.OPEN: # Expression is an application expression: (M N) - first = self.next() - second = self.next() + first = next(self) + second = next(self) exps = [] while self.token(0) != Parser.CLOSE: # Support expressions like: (M N P) == ((M N) P) - exps.append(self.next()) + exps.append(next(self)) tok = self.token() # swallow the close token assert tok == Parser.CLOSE if isinstance(second, Operator): @@ -721,7 +721,7 @@ # Expression is a simple variable expression: x return VariableExpression(Variable(tok)) else: - raise Error, "parse error, unexpected token: %s" % tok + raise Error("parse error, unexpected token: %s" % tok) # This is intended to be overridden, so that you can derive a Parser class # that constructs expressions using your subclasses. So far we only need @@ -762,7 +762,7 @@ ApplicationExpression(XZ, Y)))) O = LambdaExpression(x, LambdaExpression(y, XY)) N = ApplicationExpression(LambdaExpression(x, XA), I) - T = Parser('\\x y.(x y z)').next() + T = next(Parser('\\x y.(x y z)')) return [X, XZ, XYZ, I, K, L, S, B, C, O, N, T] def demo(): @@ -771,21 +771,21 @@ P = VariableExpression(p) Q = VariableExpression(q) for l in expressions(): - print "Expression:", l - print "Variables:", l.variables() - print "Free:", l.free() - print "Subterms:", l.subterms() - print "Simplify:",l.simplify() + print("Expression:", l) + print("Variables:", l.variables()) + print("Free:", l.free()) + print("Subterms:", l.subterms()) + print("Simplify:",l.simplify()) la = ApplicationExpression(ApplicationExpression(l, P), Q) las = la.simplify() - print "Apply and simplify: %s -> %s" % (la, las) - ll = Parser(str(l)).next() - print 'l is:', l - print 'll is:', ll + print("Apply and simplify: %s -> %s" % (la, las)) + ll = next(Parser(str(l))) + print('l is:', l) + print('ll is:', ll) assert l.equals(ll) - print "Serialize and reparse: %s -> %s" % (l, ll) - print "Variables:", ll.variables() - print "Normalize: %s" % ll.normalize() + print("Serialize and reparse: %s -> %s" % (l, ll)) + print("Variables:", ll.variables()) + print("Normalize: %s" % ll.normalize()) if __name__ == '__main__': RefactoringTool: Files that were modified: RefactoringTool: ../python3/nltk_contrib/nltk_contrib/mit/six863/semantics/logic.py + for i in '$(find ../python3 -type f -name '\''*.py'\'' |grep -v '\''Tables\.py'\'')' + 2to3 -w -n ../python3/nltk_contrib/nltk_contrib/mit/six863/semantics/interact.py RefactoringTool: Skipping optional fixer: buffer RefactoringTool: Skipping optional fixer: idioms RefactoringTool: Skipping optional fixer:
set_literal RefactoringTool: Skipping optional fixer: ws_comma RefactoringTool: Refactored ../python3/nltk_contrib/nltk_contrib/mit/six863/semantics/interact.py RefactoringTool: Files that were modified: RefactoringTool: ../python3/nltk_contrib/nltk_contrib/mit/six863/semantics/interact.py --- ../python3/nltk_contrib/nltk_contrib/mit/six863/semantics/interact.py (original) +++ ../python3/nltk_contrib/nltk_contrib/mit/six863/semantics/interact.py (refactored) @@ -1,5 +1,5 @@ -from featurechart import * -from logic import Counter +from .featurechart import * +from .logic import Counter import sys def interact(grammar_filename, trace=2): @@ -14,10 +14,10 @@ # Read a line and parse it. trees = cp.parse(line) if len(trees) == 0: - print "I don't understand." + print("I don't understand.") continue elif len(trees) > 1: - print "That was ambiguous, but I'll guess at what you meant." + print("That was ambiguous, but I'll guess at what you meant.") # Extract semantic information from the parse tree. tree = trees[0] @@ -36,13 +36,13 @@ skolem = skolem.replace_unique(var, counter) if trace > 0: - print tree - print 'Semantic value:', skolem + print(tree) + print('Semantic value:', skolem) clauses = skolem.clauses() if trace > 1: - print "Got these clauses:" + print("Got these clauses:") for clause in clauses: - print '\t', clause + print('\t', clause) if pos == 'S': # Handle statements @@ -68,11 +68,11 @@ if success: # answer answer = bindings.get('wh', 'Yes.') - print answer['variable']['name'] + print(answer['variable']['name']) else: # This is an open world without negation, so negative answers # aren't possible. - print "I don't know." + print("I don't know.") def demo(): interact('lab3-slash.cfg', trace=2) + for i in '$(find ../python3 -type f -name '\''*.py'\'' |grep -v '\''Tables\.py'\'')' + 2to3 -w -n ../python3/nltk_contrib/nltk_contrib/mit/six863/semantics/featurelite.py RefactoringTool: Skipping optional fixer: buffer RefactoringTool: Skipping optional fixer: idioms RefactoringTool: Skipping optional fixer: set_literal RefactoringTool: Skipping optional fixer: ws_comma RefactoringTool: Refactored ../python3/nltk_contrib/nltk_contrib/mit/six863/semantics/featurelite.py --- ../python3/nltk_contrib/nltk_contrib/mit/six863/semantics/featurelite.py (original) +++ ../python3/nltk_contrib/nltk_contrib/mit/six863/semantics/featurelite.py (refactored) @@ -91,7 +91,7 @@ class FeatureI(object): def __init__(self): - raise TypeError, "FeatureI is an abstract interface" + raise TypeError("FeatureI is an abstract interface") class _FORWARD(object): """ @@ -102,7 +102,7 @@ instantiated. """ def __init__(self): - raise TypeError, "The _FORWARD class is not meant to be instantiated" + raise TypeError("The _FORWARD class is not meant to be instantiated") class Variable(object): """ @@ -260,7 +260,7 @@ # discard Variables which don't look like FeatureVariables if varstr.startswith('?'): var = makevar(varstr) - if bindings.has_key(var.name()): + if var.name() in bindings: newval = newval.replace(semvar, bindings[var.name()]) return newval @@ -278,13 +278,13 @@ if isMapping(obj): return obj dict = {} dict['__class__'] = obj.__class__.__name__ - for (key, value) in obj.__dict__.items(): + for (key, value) in list(obj.__dict__.items()): dict[key] = object_to_features(value) return dict def variable_representer(dumper, var): "Output variables in YAML as ?name." 
- return dumper.represent_scalar(u'!var', u'?%s' % var.name()) + return dumper.represent_scalar('!var', '?%s' % var.name()) yaml.add_representer(Variable, variable_representer) def variable_constructor(loader, node): @@ -292,8 +292,8 @@ value = loader.construct_scalar(node) name = value[1:] return Variable(name) -yaml.add_constructor(u'!var', variable_constructor) -yaml.add_implicit_resolver(u'!var', re.compile(r'^\?\w+$')) +yaml.add_constructor('!var', variable_constructor) +yaml.add_implicit_resolver('!var', re.compile(r'^\?\w+$')) def _copy_and_bind(feature, bindings, memo=None): """ @@ -305,14 +305,14 @@ if memo is None: memo = {} if id(feature) in memo: return memo[id(feature)] if isinstance(feature, Variable) and bindings is not None: - if not bindings.has_key(feature.name()): + if feature.name() not in bindings: bindings[feature.name()] = feature.copy() result = _copy_and_bind(bindings[feature.name()], None, memo) else: if isMapping(feature): # Construct a new object of the same class result = feature.__class__() - for (key, value) in feature.items(): + for (key, value) in list(feature.items()): result[key] = _copy_and_bind(value, bindings, memo) elif isinstance(feature, SubstituteBindingsI): if bindings is not None: @@ -629,19 +629,19 @@ if memo is None: memo = {} copymemo = {} - if memo.has_key((id(feature1), id(feature2))): + if (id(feature1), id(feature2)) in memo: result = memo[id(feature1), id(feature2)] if result is UnificationFailure: if trace > 2: - print '(cached) Unifying: %r + %r --> [fail]' % (feature1, feature2) + print('(cached) Unifying: %r + %r --> [fail]' % (feature1, feature2)) raise result() if trace > 2: - print '(cached) Unifying: %r + %r --> ' % (feature1, feature2), - print repr(result) + print('(cached) Unifying: %r + %r --> ' % (feature1, feature2), end=' ') + print(repr(result)) return result if trace > 1: - print 'Unifying: %r + %r --> ' % (feature1, feature2), + print('Unifying: %r + %r --> ' % (feature1, feature2), end=' ') # Make copies of the two structures (since the unification algorithm is # destructive). Use the same memo, to preserve reentrance links between @@ -650,7 +650,7 @@ copy2 = _copy_and_bind(feature2, bindings2, copymemo) # Preserve links between bound variables and the two feature structures. RefactoringTool: Files that were modified: RefactoringTool: ../python3/nltk_contrib/nltk_contrib/mit/six863/semantics/featurelite.py for b in (bindings1, bindings2): - for (vname, value) in b.items(): + for (vname, value) in list(b.items()): value_id = id(value) if value_id in copymemo: b[vname] = copymemo[value_id] @@ -660,7 +660,7 @@ unified = _destructively_unify(copy1, copy2, bindings1, bindings2, memo, fail) except UnificationFailure: - if trace > 1: print '[fail]' + if trace > 1: print('[fail]') memo[id(feature1), id(feature2)] = UnificationFailure raise @@ -672,9 +672,9 @@ _lookup_values(bindings2, {}, remove=True) if trace > 1: - print repr(unified) + print(repr(unified)) elif trace > 0: - print 'Unifying: %r + %r --> %r' % (feature1, feature2, repr(unified)) + print('Unifying: %r + %r --> %r' % (feature1, feature2, repr(unified))) memo[id(feature1), id(feature2)] = unified return unified @@ -690,11 +690,11 @@ and C{other} are undefined.
""" if depth > 50: - print "Infinite recursion in this unification:" - print show(dict(feature1=feature1, feature2=feature2, - bindings1=bindings1, bindings2=bindings2, memo=memo)) - raise ValueError, "Infinite recursion in unification" - if memo.has_key((id(feature1), id(feature2))): + print("Infinite recursion in this unification:") + print(show(dict(feature1=feature1, feature2=feature2, + bindings1=bindings1, bindings2=bindings2, memo=memo))) + raise ValueError("Infinite recursion in unification") + if (id(feature1), id(feature2)) in memo: result = memo[id(feature1), id(feature2)] if result is UnificationFailure: raise result() unified = _do_unify(feature1, feature2, bindings1, bindings2, memo, fail, @@ -737,9 +737,9 @@ # At this point, we know they're both mappings. # Do the destructive part of unification. - while feature2.has_key(_FORWARD): feature2 = feature2[_FORWARD] + while _FORWARD in feature2: feature2 = feature2[_FORWARD] if feature1 is not feature2: feature2[_FORWARD] = feature1 - for (fname, val2) in feature2.items(): + for (fname, val2) in list(feature2.items()): if fname == _FORWARD: continue val1 = feature1.get(fname) feature1[fname] = _destructively_unify(val1, val2, bindings1, @@ -752,12 +752,12 @@ the target of its forward pointer (to preserve reentrance). """ if not isMapping(feature): return - if visited.has_key(id(feature)): return + if id(feature) in visited: return visited[id(feature)] = True - for fname, fval in feature.items(): + for fname, fval in list(feature.items()): if isMapping(fval): - while fval.has_key(_FORWARD): + while _FORWARD in fval: fval = fval[_FORWARD] feature[fname] = fval _apply_forwards(fval, visited) @@ -789,10 +789,10 @@ else: return var.forwarded_self() if not isMapping(mapping): return mapping - if visited.has_key(id(mapping)): return mapping + if id(mapping) in visited: return mapping visited[id(mapping)] = True - for fname, fval in mapping.items(): + for fname, fval in list(mapping.items()): if isMapping(fval): _lookup_values(fval, visited) elif isinstance(fval, Variable): @@ -813,9 +813,9 @@ Replace any feature structures that have been forwarded by their new identities. """ - for (key, value) in bindings.items(): - if isMapping(value) and value.has_key(_FORWARD): - while value.has_key(_FORWARD): + for (key, value) in list(bindings.items()): + if isMapping(value) and _FORWARD in value: + while _FORWARD in value: value = value[_FORWARD] bindings[key] = value + for i in '$(find ../python3 -type f -name '\''*.py'\'' |grep -v '\''Tables\.py'\'')' + 2to3 -w -n ../python3/nltk_contrib/nltk_contrib/mit/six863/semantics/featurechart.py RefactoringTool: Skipping optional fixer: buffer RefactoringTool: Skipping optional fixer: idioms RefactoringTool: Skipping optional fixer: set_literal RefactoringTool: Skipping optional fixer: ws_comma RefactoringTool: Refactored ../python3/nltk_contrib/nltk_contrib/mit/six863/semantics/featurechart.py --- ../python3/nltk_contrib/nltk_contrib/mit/six863/semantics/featurechart.py (original) +++ ../python3/nltk_contrib/nltk_contrib/mit/six863/semantics/featurechart.py (refactored) @@ -13,11 +13,11 @@ """ import yaml -from chart import * -from category import * -import cfg - -from featurelite import * +from .chart import * +from .category import * +from . import cfg + +from .featurelite import * def load_earley(filename, trace=1): """ @@ -125,7 +125,7 @@ @return: the value of the right-hand side with variables set. 
@rtype: C{Category} """ - return tuple(apply(x, self._vars) for x in TreeEdge.rhs(self)) + return tuple(x(*self._vars) for x in TreeEdge.rhs(self)) def orig_rhs(self): """ @@ -160,7 +160,7 @@ left_bindings = left_edge.vars().copy() right_bindings = right_edge.vars().copy() try: - unified = unify(left_edge.next(), right_edge.lhs(), left_bindings, + unified = unify(next(left_edge), right_edge.lhs(), left_bindings, right_bindings, memo=self.unify_memo, trace=self.trace-2) if isinstance(unified, Category): unified.freeze() except UnificationFailure: return @@ -213,7 +213,7 @@ for prod in grammar.productions(): bindings = edge.vars().copy() try: - unified = unify(edge.next(), prod.lhs(), bindings, {}, + unified = unify(next(edge), prod.lhs(), bindings, {}, memo=self.unify_memo, trace=self.trace-2) if isinstance(unified, Category): unified.freeze() except UnificationFailure: @@ -258,7 +258,7 @@ # Width, for printing trace edges. #w = 40/(chart.num_leaves()+1) w = 2 - if self._trace > 0: print ' '*9, chart.pp_leaves(w) + if self._trace > 0: print(' '*9, chart.pp_leaves(w)) # Initialize the chart with a special "starter" edge. root = GrammarCategory(pos='[INIT]') @@ -272,7 +272,7 @@ #scanner = FeatureScannerRule(self._lexicon) for end in range(chart.num_leaves()+1): - if self._trace > 1: print 'Processing queue %d' % end + if self._trace > 1: print('Processing queue %d' % end) # Scanner rule substitute, i.e. this is being used in place # of a proper FeatureScannerRule at the moment. @@ -285,14 +285,14 @@ {}) chart.insert(new_pos_edge, (new_leaf_edge,)) if self._trace > 0: - print 'Scanner ', chart.pp_edge(new_pos_edge,w) + print('Scanner ', chart.pp_edge(new_pos_edge,w)) for edge in chart.select(end=end): if edge.is_incomplete(): for e in predictor.apply(chart, grammar, edge): if self._trace > 1: - print 'Predictor', chart.pp_edge(e,w) + print('Predictor', chart.pp_edge(e,w)) #if edge.is_incomplete(): # for e in scanner.apply(chart, grammar, edge): # if self._trace > 0: @@ -300,7 +300,7 @@ if edge.is_complete(): for e in completer.apply(chart, grammar, edge): if self._trace > 0: - print 'Completer', chart.pp_edge(e,w) + print('Completer', chart.pp_edge(e,w)) # Output a list of complete parses. 
return chart.parses(root) @@ -348,14 +348,14 @@ return earley_lexicon.get(word.upper(), []) sent = 'I saw John with a dog with my cookie' - print "Sentence:\n", sent + print("Sentence:\n", sent) from nltk import tokenize tokens = list(tokenize.whitespace(sent)) t = time.time() cp = FeatureEarleyChartParse(earley_grammar, lexicon, trace=1) trees = cp.get_parse_list(tokens) - print "Time: %s" % (time.time() - t) - for tree in trees: print tree + print("Time: %s" % (time.time() - t)) + for tree in trees: print(tree) def run_profile(): import profile RefactoringTool: Files that were modified: RefactoringTool: ../python3/nltk_contrib/nltk_contrib/mit/six863/semantics/featurechart.py + for i in '$(find ../python3 -type f -name '\''*.py'\'' |grep -v '\''Tables\.py'\'')' + 2to3 -w -n ../python3/nltk_contrib/nltk_contrib/mit/six863/semantics/chart.py RefactoringTool: Skipping optional fixer: buffer RefactoringTool: Skipping optional fixer: idioms RefactoringTool: Skipping optional fixer: set_literal RefactoringTool: Skipping optional fixer: ws_comma RefactoringTool: Refactored ../python3/nltk_contrib/nltk_contrib/mit/six863/semantics/chart.py --- ../python3/nltk_contrib/nltk_contrib/mit/six863/semantics/chart.py (original) +++ ../python3/nltk_contrib/nltk_contrib/mit/six863/semantics/chart.py (refactored) @@ -9,7 +9,7 @@ # # $Id: chart.py 4157 2007-02-28 09:56:25Z stevenbird $ -from __init__ import * +from .__init__ import * from nltk import cfg, Tree """ @@ -162,7 +162,7 @@ """ raise AssertionError('EdgeI is an abstract interface') - def next(self): + def __next__(self): """ @return: The element of this edge's right-hand side that immediately follows its dot. @@ -271,7 +271,7 @@ def dot(self): return self._dot def is_complete(self): return self._dot == len(self._rhs) def is_incomplete(self): return self._dot != len(self._rhs) - def next(self): + def __next__(self): if self._dot >= len(self._rhs): return None else: return self._rhs[self._dot] @@ -334,7 +334,7 @@ def dot(self): return 0 def is_complete(self): return True def is_incomplete(self): return False - def next(self): return None + def __next__(self): return None # Comparisons & hashing def __cmp__(self, other): @@ -487,12 +487,12 @@ if restrictions=={}: return iter(self._edges) # Find the index corresponding to the given restrictions. - restr_keys = restrictions.keys() + restr_keys = list(restrictions.keys()) restr_keys.sort() restr_keys = tuple(restr_keys) # If it doesn't exist, then create it. - if not self._indexes.has_key(restr_keys): + if restr_keys not in self._indexes: self._add_index(restr_keys) vals = [restrictions[k] for k in restr_keys] return iter(self._indexes[restr_keys].get(tuple(vals), [])) @@ -505,7 +505,7 @@ # Make sure it's a valid index. for k in restr_keys: if not hasattr(EdgeI, k): - raise ValueError, 'Bad restriction: %s' % k + raise ValueError('Bad restriction: %s' % k) # Create the index. self._indexes[restr_keys] = {} @@ -537,12 +537,12 @@ C{child_pointer_list} with C{edge}. """ # Is it a new edge? - if not self._edge_to_cpls.has_key(edge): # Add it to the list of edges.
self._edges.append(edge) # Register with indexes - for (restr_keys, index) in self._indexes.items(): + for (restr_keys, index) in list(self._indexes.items()): vals = [getattr(edge, k)() for k in restr_keys] index = self._indexes[restr_keys] index.setdefault(tuple(vals),[]).append(edge) @@ -551,7 +551,7 @@ cpls = self._edge_to_cpls.setdefault(edge,{}) child_pointer_list = tuple(child_pointer_list) - if cpls.has_key(child_pointer_list): + if child_pointer_list in cpls: # We've already got this CPL; return false. return False else: @@ -601,7 +601,7 @@ than once, we can reuse the same trees. """ # If we've seen this edge before, then reuse our old answer. - if memo.has_key(edge): return memo[edge] + if edge in memo: return memo[edge] trees = [] @@ -677,7 +677,7 @@ been used to form this edge. """ # Make a copy, in case they modify it. - return self._edge_to_cpls.get(edge, {}).keys() + return list(self._edge_to_cpls.get(edge, {}).keys()) #//////////////////////////////////////////////////////////// # Display @@ -839,7 +839,7 @@ @rtype: C{list} of L{EdgeI} @return: A list of the edges that were added. """ - raise AssertionError, 'ChartRuleI is an abstract interface' + raise AssertionError('ChartRuleI is an abstract interface') def apply_iter(self, chart, grammar, *edges): """ @@ -854,7 +854,7 @@ that should be passed to C{apply} is specified by the L{NUM_EDGES} class variable. """ - raise AssertionError, 'ChartRuleI is an abstract interface' + raise AssertionError('ChartRuleI is an abstract interface') def apply_everywhere(self, chart, grammar): """ @@ -864,7 +864,7 @@ @rtype: C{list} of L{EdgeI} @return: A list of the edges that were added. """ - raise AssertionError, 'ChartRuleI is an abstract interface' + raise AssertionError('ChartRuleI is an abstract interface') def apply_everywhere_iter(self, chart, grammar): """ @@ -875,7 +875,7 @@ return. @rtype: C{iter} of L{EdgeI} """ - raise AssertionError, 'ChartRuleI is an abstract interface' + raise AssertionError('ChartRuleI is an abstract interface') class AbstractChartRule(object): """ @@ -893,7 +893,7 @@ # Subclasses must define apply_iter. def apply_iter(self, chart, grammar, *edges): - raise AssertionError, 'AbstractChartRule is an abstract class' + raise AssertionError('AbstractChartRule is an abstract class') # Default: loop through the given number of edges, and call # self.apply() for each set of edges. @@ -921,7 +921,7 @@ yield new_edge else: - raise AssertionError, 'NUM_EDGES>3 is not currently supported' + raise AssertionError('NUM_EDGES>3 is not currently supported') # Default: delegate to apply_iter. def apply(self, chart, grammar, *edges): @@ -953,7 +953,7 @@ def apply_iter(self, chart, grammar, left_edge, right_edge): # Make sure the rule is applicable. 
if not (left_edge.end() == right_edge.start() and - left_edge.next() == right_edge.lhs() and + next(left_edge) == right_edge.lhs() and left_edge.is_incomplete() and right_edge.is_complete()): return @@ -993,7 +993,7 @@ if edge1.is_incomplete(): # edge1 = left_edge; edge2 = right_edge for edge2 in chart.select(start=edge1.end(), is_complete=True, - lhs=edge1.next()): + lhs=next(edge1)): for new_edge in fr.apply_iter(chart, grammar, edge1, edge2): yield new_edge else: @@ -1052,7 +1052,7 @@ NUM_EDGES = 1 def apply_iter(self, chart, grammar, edge): if edge.is_complete(): return - for prod in grammar.productions(lhs=edge.next()): + for prod in grammar.productions(lhs=next(edge)): new_edge = TreeEdge.from_production(prod, edge.end()) if chart.insert(new_edge, ()): yield new_edge @@ -1071,7 +1071,7 @@ if edge.is_complete() or edge.end() >= chart.num_leaves(): return index = edge.end() leaf = chart.leaf(index) - if edge.next() == leaf: + if next(edge) == leaf: new_edge = LeafEdge(leaf, index) if chart.insert(new_edge, ()): yield new_edge @@ -1119,7 +1119,7 @@ # If we've already applied this rule to an edge with the same # next & end, and the chart & grammar have not changed, then # just return (no new edges to add). - done = self._done.get((edge.next(), edge.end()), (None,None)) + done = self._done.get((next(edge), edge.end()), (None,None)) if done[0] is chart and done[1] is grammar: return # Add all the edges indicated by the top down expand rule. @@ -1127,7 +1127,7 @@ yield e # Record the fact that we've applied this rule. - self._done[edge.next(), edge.end()] = (chart, grammar) + self._done[next(edge), edge.end()] = (chart, grammar) def __str__(self): return 'Top Down Expand Rule' @@ -1219,11 +1219,11 @@ if edge.is_complete() or edge.end()>=chart.num_leaves(): return index = edge.end() leaf = chart.leaf(index) - if edge.next() in self._word_to_pos.get(leaf, []): + if next(edge) in self._word_to_pos.get(leaf, []): new_leaf_edge = LeafEdge(leaf, index) if chart.insert(new_leaf_edge, ()): yield new_leaf_edge - new_pos_edge = TreeEdge((index,index+1), edge.next(), + new_pos_edge = TreeEdge((index,index+1), next(edge), [leaf], 1) if chart.insert(new_pos_edge, (new_leaf_edge,)): yield new_pos_edge @@ -1284,7 +1284,7 @@ # Width, for printing trace edges. w = 50/(chart.num_leaves()+1) - if self._trace > 0: print ' ', chart.pp_leaves(w) + if self._trace > 0: print(' ', chart.pp_leaves(w)) # Initialize the chart with a special "starter" edge. root = cfg.Nonterminal('[INIT]') @@ -1297,20 +1297,20 @@ scanner = ScannerRule(self._lexicon) for end in range(chart.num_leaves()+1): - if self._trace > 1: print 'Processing queue %d' % end + if self._trace > 1: print('Processing queue %d' % end) for edge in chart.select(end=end): if edge.is_incomplete(): for e in predictor.apply(chart, grammar, edge): if self._trace > 0: - print 'Predictor', chart.pp_edge(e,w) + print('Predictor', chart.pp_edge(e,w)) if edge.is_incomplete(): for e in scanner.apply(chart, grammar, edge): if self._trace > 0: - print 'Scanner ', chart.pp_edge(e,w) + print('Scanner ', chart.pp_edge(e,w)) if edge.is_complete(): for e in completer.apply(chart, grammar, edge): if self._trace > 0: - print 'Completer', chart.pp_edge(e,w) + print('Completer', chart.pp_edge(e,w)) # Output a list of complete parses. return chart.parses(grammar.start(), tree_class=tree_class) @@ -1363,7 +1363,7 @@ # Width, for printing trace edges. 
w = 50/(chart.num_leaves()+1) - if self._trace > 0: print chart.pp_leaves(w) + if self._trace > 0: print(chart.pp_leaves(w)) edges_added = 1 while edges_added > 0: @@ -1372,11 +1372,11 @@ edges_added_by_rule = 0 for e in rule.apply_everywhere(chart, grammar): if self._trace > 0 and edges_added_by_rule == 0: - print '%s:' % rule + print('%s:' % rule) edges_added_by_rule += 1 - if self._trace > 1: print chart.pp_edge(e,w) + if self._trace > 1: print(chart.pp_edge(e,w)) if self._trace == 1 and edges_added_by_rule > 0: - print ' - Added %d edges' % edges_added_by_rule + print(' - Added %d edges' % edges_added_by_rule) edges_added += edges_added_by_rule # Return a list of complete parses. @@ -1438,14 +1438,14 @@ added with the current strategy and grammar. """ if self._chart is None: - raise ValueError, 'Parser must be initialized first' + raise ValueError('Parser must be initialized first') while 1: self._restart = False w = 50/(self._chart.num_leaves()+1) for e in self._parse(): - if self._trace > 1: print self._current_chartrule - if self._trace > 0: print self._chart.pp_edge(e,w) + if self._trace > 1: print(self._current_chartrule) + if self._trace > 0: print(self._chart.pp_edge(e,w)) yield e if self._restart: break else: @@ -1579,23 +1579,23 @@ # Tokenize a sample sentence. sent = 'I saw John with a dog with my cookie' RefactoringTool: Files that were modified: RefactoringTool: ../python3/nltk_contrib/nltk_contrib/mit/six863/semantics/chart.py - print "Sentence:\n", sent + print("Sentence:\n", sent) from nltk import tokenize tokens = list(tokenize.whitespace(sent)) - print tokens + print(tokens) # Ask the user which parser to test - print ' 1: Top-down chart parser' - print ' 2: Bottom-up chart parser' - print ' 3: Earley parser' - print ' 4: Stepping chart parser (alternating top-down & bottom-up)' - print ' 5: All parsers' - print '\nWhich parser (1-5)? ', + print(' 1: Top-down chart parser') + print(' 2: Bottom-up chart parser') + print(' 3: Earley parser') + print(' 4: Stepping chart parser (alternating top-down & bottom-up)') + print(' 5: All parsers') + print('\nWhich parser (1-5)? ', end=' ') choice = sys.stdin.readline().strip() - print + print() if choice not in '12345': - print 'Bad parser number' + print('Bad parser number') return # Keep track of how long each parser takes. @@ -1608,7 +1608,7 @@ parses = cp.get_parse_list(tokens) times['top down'] = time.time()-t assert len(parses)==5, 'Not all parses found' - for tree in parses: print tree + for tree in parses: print(tree) # Run the bottom-up parser, if requested. if choice in ('2', '5'): @@ -1617,7 +1617,7 @@ parses = cp.get_parse_list(tokens) times['bottom up'] = time.time()-t assert len(parses)==5, 'Not all parses found' - for tree in parses: print tree + for tree in parses: print(tree) # Run the earley, if requested. if choice in ('3', '5'): @@ -1626,7 +1626,7 @@ parses = cp.get_parse_list(tokens) times['Earley parser'] = time.time()-t assert len(parses)==5, 'Not all parses found' - for tree in parses: print tree + for tree in parses: print(tree) # Run the stepping parser, if requested.
if choice in ('4', '5'): @@ -1634,24 +1634,24 @@ cp = SteppingChartParse(grammar, trace=1) cp.initialize(tokens) for i in range(5): - print '*** SWITCH TO TOP DOWN' + print('*** SWITCH TO TOP DOWN') cp.set_strategy(TD_STRATEGY) for j, e in enumerate(cp.step()): if j>20 or e is None: break - print '*** SWITCH TO BOTTOM UP' + print('*** SWITCH TO BOTTOM UP') cp.set_strategy(BU_STRATEGY) for j, e in enumerate(cp.step()): if j>20 or e is None: break times['stepping'] = time.time()-t assert len(cp.parses())==5, 'Not all parses found' - for parse in cp.parses(): print parse + for parse in cp.parses(): print(parse) # Print the times of all parsers: - maxlen = max(len(key) for key in times.keys()) - format = '%' + `maxlen` + 's parser: %6.3fsec' - times_items = times.items() + maxlen = max(len(key) for key in list(times.keys())) + format = '%' + repr(maxlen) + 's parser: %6.3fsec' + times_items = list(times.items()) times_items.sort(lambda a,b:cmp(a[1], b[1])) for (parser, t) in times_items: - print format % (parser, t) + print(format % (parser, t)) if __name__ == '__main__': demo() + for i in '$(find ../python3 -type f -name '\''*.py'\'' |grep -v '\''Tables\.py'\'')' + 2to3 -w -n ../python3/nltk_contrib/nltk_contrib/mit/six863/semantics/cfg.py RefactoringTool: Skipping optional fixer: buffer RefactoringTool: Skipping optional fixer: idioms RefactoringTool: Skipping optional fixer: set_literal RefactoringTool: Skipping optional fixer: ws_comma RefactoringTool: Refactored ../python3/nltk_contrib/nltk_contrib/mit/six863/semantics/cfg.py RefactoringTool: Files that were modified: RefactoringTool: ../python3/nltk_contrib/nltk_contrib/mit/six863/semantics/cfg.py --- ../python3/nltk_contrib/nltk_contrib/mit/six863/semantics/cfg.py (original) +++ ../python3/nltk_contrib/nltk_contrib/mit/six863/semantics/cfg.py (refactored) @@ -226,8 +226,8 @@ @param rhs: The right-hand side of the new C{Production}. @type rhs: sequence of (C{Nonterminal} and (terminal)) """ - if isinstance(rhs, (str, unicode)): - raise TypeError, 'production right hand side should be a list, not a string' + if isinstance(rhs, str): + raise TypeError('production right hand side should be a list, not a string') self._lhs = lhs self._rhs = tuple(rhs) self._hash = hash((self._lhs, self._rhs)) @@ -385,7 +385,7 @@ """ # Use _PARSE_RE to check that it's valid. if not _PARSE_RE.match(s): - raise ValueError, 'Bad production string' + raise ValueError('Bad production string') # Use _SPLIT_RE to process it. pieces = _SPLIT_RE.split(s) pieces = [p for i,p in enumerate(pieces) if i%2==1] @@ -407,9 +407,9 @@ if line.startswith('#') or line=='': continue try: productions += parse_production(line) except ValueError: - raise ValueError, 'Unable to parse line %s' % linenum + raise ValueError('Unable to parse line %s' % linenum) if len(productions) == 0: - raise ValueError, 'No productions found!' 
+ raise ValueError('No productions found!') start = productions[0].lhs() return Grammar(start, productions) @@ -429,11 +429,11 @@ N, V, P, Det = cfg.nonterminals('N, V, P, Det') VP_slash_NP = VP/NP - print 'Some nonterminals:', [S, NP, VP, PP, N, V, P, Det, VP/NP] - print ' S.symbol() =>', `S.symbol()` - print - - print cfg.Production(S, [NP]) + print('Some nonterminals:', [S, NP, VP, PP, N, V, P, Det, VP/NP]) + print(' S.symbol() =>', repr(S.symbol())) + print() + + print(cfg.Production(S, [NP])) # Create some Grammar Productions grammar = cfg.parse_grammar(""" @@ -453,11 +453,11 @@ P -> 'in' """) - print 'A Grammar:', `grammar` - print ' grammar.start() =>', `grammar.start()` - print ' grammar.productions() =>', + print('A Grammar:', repr(grammar)) + print(' grammar.start() =>', repr(grammar.start())) + print(' grammar.productions() =>', end=' ') # Use string.replace(...) is to line-wrap the output. - print `grammar.productions()`.replace(',', ',\n'+' '*25) - print + print(repr(grammar.productions()).replace(',', ',\n'+' '*25)) + print() if __name__ == '__main__': demo() + for i in '$(find ../python3 -type f -name '\''*.py'\'' |grep -v '\''Tables\.py'\'')' + 2to3 -w -n ../python3/nltk_contrib/nltk_contrib/mit/six863/semantics/category.py RefactoringTool: Skipping optional fixer: buffer RefactoringTool: Skipping optional fixer: idioms RefactoringTool: Skipping optional fixer: set_literal RefactoringTool: Skipping optional fixer: ws_comma RefactoringTool: Refactored ../python3/nltk_contrib/nltk_contrib/mit/six863/semantics/category.py --- ../python3/nltk_contrib/nltk_contrib/mit/six863/semantics/category.py (original) +++ ../python3/nltk_contrib/nltk_contrib/mit/six863/semantics/category.py (refactored) @@ -10,11 +10,11 @@ # # $Id: category.py 4162 2007-03-01 00:46:05Z stevenbird $ -import logic +from . import logic from nltk.cfg import * #from kimmo import kimmo -from featurelite import * +from .featurelite import * from copy import deepcopy import yaml # import nltk.yamltags @@ -123,16 +123,16 @@ self._features[key] = value def items(self): - return self._features.items() + return list(self._features.items()) def keys(self): - return self._features.keys() + return list(self._features.keys()) def values(self): - return self._features.values() + return list(self._features.values()) def has_key(self, key): - return self._features.has_key(key) + return key in self._features def symbol(self): """ @@ -161,7 +161,7 @@ """ @return: a list of all features that have values. 
- return self._features.keys() + return list(self._features.keys()) has_feature = has_key @@ -179,7 +179,7 @@ @staticmethod def _remove_unbound_vars(obj): - for (key, value) in obj.items(): + for (key, value) in list(obj.items()): if isinstance(value, Variable): del obj[key] elif isinstance(value, (Category, dict)): @@ -206,7 +206,7 @@ def _str(cls, obj, reentrances, reentrance_ids, normalize=False): segments = [] - keys = obj.keys() + keys = list(obj.keys()) keys.sort() for fname in keys: if fname == cls.headname: continue @@ -389,14 +389,14 @@ # Semantic value of the form '; return an ApplicationExpression match = _PARSE_RE['application'].match(s, position) if match is not None: - fun = ParserSubstitute(match.group(2)).next() - arg = ParserSubstitute(match.group(3)).next() + fun = next(ParserSubstitute(match.group(2))) + arg = next(ParserSubstitute(match.group(3))) return logic.ApplicationExpressionSubst(fun, arg), match.end() # other semantic value enclosed by '< >'; return value given by the lambda expr parser match = _PARSE_RE['semantics'].match(s, position) if match is not None: - return ParserSubstitute(match.group(1)).next(), match.end() + return next(ParserSubstitute(match.group(1))), match.end() # String value if s[position] in "'\"": @@ -455,11 +455,11 @@ try: lhs, position = cls.inner_parse(s, position) lhs = cls(lhs) - except ValueError, e: + except ValueError as e: estr = ('Error parsing field structure\n\n\t' + s + '\n\t' + ' '*e.args[1] + '^ ' + 'Expected %s\n' % e.args[0]) - raise ValueError, estr + raise ValueError(estr) lhs.freeze() match = _PARSE_RE['arrow'].match(s, position) @@ -473,11 +473,11 @@ try: val, position = cls.inner_parse(s, position, {}) if isinstance(val, dict): val = cls(val) - except ValueError, e: + except ValueError as e: estr = ('Error parsing field structure\n\n\t' + s + '\n\t' + ' '*e.args[1] + '^ ' + 'Expected %s\n' % e.args[0]) - raise ValueError, estr + raise ValueError(estr) if isinstance(val, Category): val.freeze() rhs.append(val) position = _PARSE_RE['whitespace'].match(s, position).end() @@ -519,7 +519,7 @@ def _str(cls, obj, reentrances, reentrance_ids, normalize=False): segments = [] - keys = obj.keys() + keys = list(obj.keys()) keys.sort() for fname in keys: if fname == cls.headname: continue RefactoringTool: Files that were modified: RefactoringTool: ../python3/nltk_contrib/nltk_contrib/mit/six863/semantics/category.py RefactoringTool: Warnings/messages while refactoring: RefactoringTool: ### In file ../python3/nltk_contrib/nltk_contrib/mit/six863/semantics/category.py ### RefactoringTool: Line 122: could not convert: raise "Cannot modify a frozen Category" RefactoringTool: Python 3 does not support string exceptions @@ -576,9 +576,9 @@ if slash_match is not None: position = slash_match.end() slash, position = GrammarCategory._parseval(s, position, reentrances) - if isinstance(slash, basestring): slash = {'pos': slash} + if isinstance(slash, str): slash = {'pos': slash} body['/'] = unify(body.get('/'), slash) - elif not body.has_key('/'): + elif '/' not in body: body['/'] = False return cls(body), position @@ -632,7 +632,7 @@ return lookup def earley_parser(self, trace=1): - from featurechart import FeatureEarleyChartParse + from .featurechart import FeatureEarleyChartParse if self.kimmo is None: lexicon = self.earley_lexicon() else: lexicon = self.kimmo_lexicon() @@ -686,28 +686,28 @@ yaml.add_representer(GrammarCategory, GrammarCategory.to_yaml) def demo(): - print "Category(pos='n', agr=dict(number='pl', gender='f')):" - print -
print Category(pos='n', agr=dict(number='pl', gender='f')) - print repr(Category(pos='n', agr=dict(number='pl', gender='f'))) - print - print "GrammarCategory.parse('NP[sem=/NP'):" - print - print GrammarCategory.parse(r'NP[sem=]/NP') - print repr(GrammarCategory.parse(r'NP[sem=]/NP')) - print - print "GrammarCategory.parse('?x/?x'):" - print - print GrammarCategory.parse('?x/?x') - print repr(GrammarCategory.parse('?x/?x')) - print - print "GrammarCategory.parse('VP[+fin, agr=?x, tense=past]/NP[+pl, agr=?x]'):" - print - print GrammarCategory.parse('VP[+fin, agr=?x, tense=past]/NP[+pl, agr=?x]') - print repr(GrammarCategory.parse('VP[+fin, agr=?x, tense=past]/NP[+pl, agr=?x]')) - print + print("Category(pos='n', agr=dict(number='pl', gender='f')):") + print() + print(Category(pos='n', agr=dict(number='pl', gender='f'))) + print(repr(Category(pos='n', agr=dict(number='pl', gender='f')))) + print() + print("GrammarCategory.parse('NP[sem=/NP'):") + print() + print(GrammarCategory.parse(r'NP[sem=]/NP')) + print(repr(GrammarCategory.parse(r'NP[sem=]/NP'))) + print() + print("GrammarCategory.parse('?x/?x'):") + print() + print(GrammarCategory.parse('?x/?x')) + print(repr(GrammarCategory.parse('?x/?x'))) + print() + print("GrammarCategory.parse('VP[+fin, agr=?x, tense=past]/NP[+pl, agr=?x]'):") + print() + print(GrammarCategory.parse('VP[+fin, agr=?x, tense=past]/NP[+pl, agr=?x]')) + print(repr(GrammarCategory.parse('VP[+fin, agr=?x, tense=past]/NP[+pl, agr=?x]'))) + print() g = GrammarFile.read_file("speer.cfg") - print g.grammar() + print(g.grammar()) if __name__ == '__main__': demo() + for i in '$(find ../python3 -type f -name '\''*.py'\'' |grep -v '\''Tables\.py'\'')' + 2to3 -w -n ../python3/nltk_contrib/nltk_contrib/mit/six863/semantics/batchtest.py RefactoringTool: Skipping optional fixer: buffer RefactoringTool: Skipping optional fixer: idioms RefactoringTool: Skipping optional fixer: set_literal RefactoringTool: Skipping optional fixer: ws_comma RefactoringTool: Refactored ../python3/nltk_contrib/nltk_contrib/mit/six863/semantics/batchtest.py RefactoringTool: Files that were modified: RefactoringTool: ../python3/nltk_contrib/nltk_contrib/mit/six863/semantics/batchtest.py --- ../python3/nltk_contrib/nltk_contrib/mit/six863/semantics/batchtest.py (original) +++ ../python3/nltk_contrib/nltk_contrib/mit/six863/semantics/batchtest.py (refactored) @@ -1,5 +1,5 @@ -from featurechart import * -from treeview import * +from .featurechart import * +from .treeview import * def demo(): cp = load_earley('gazdar6.cfg', trace=2) + for i in '$(find ../python3 -type f -name '\''*.py'\'' |grep -v '\''Tables\.py'\'')' + 2to3 -w -n ../python3/nltk_contrib/nltk_contrib/mit/six863/semantics/__init__.py RefactoringTool: Skipping optional fixer: buffer RefactoringTool: Skipping optional fixer: idioms RefactoringTool: Skipping optional fixer: set_literal RefactoringTool: Skipping optional fixer: ws_comma RefactoringTool: Refactored ../python3/nltk_contrib/nltk_contrib/mit/six863/semantics/__init__.py RefactoringTool: Files that were modified: RefactoringTool: ../python3/nltk_contrib/nltk_contrib/mit/six863/semantics/__init__.py --- ../python3/nltk_contrib/nltk_contrib/mit/six863/semantics/__init__.py (original) +++ ../python3/nltk_contrib/nltk_contrib/mit/six863/semantics/__init__.py (refactored) @@ -131,7 +131,7 @@ """ # Make sure we're not directly instantiated: if self.__class__ == AbstractParse: - raise AssertionError, "Abstract classes can't be instantiated" + raise AssertionError("Abstract classes can't be 
instantiated") def parse(self, sentence): return self.get_parse_list(sentence.split()) @@ -155,9 +155,9 @@ line = line.strip() if not line: continue if line.startswith('#'): - print line + print(line) continue - print "Sentence:", line + print("Sentence:", line) parses = self.parse(line) - print "%d parses." % len(parses) - for tree in parses: print tree + print("%d parses." % len(parses)) + for tree in parses: print(tree) + for i in '$(find ../python3 -type f -name '\''*.py'\'' |grep -v '\''Tables\.py'\'')' + 2to3 -w -n ../python3/nltk_contrib/nltk_contrib/mit/six863/parse/treeview.py RefactoringTool: Skipping optional fixer: buffer RefactoringTool: Skipping optional fixer: idioms RefactoringTool: Skipping optional fixer: set_literal RefactoringTool: Skipping optional fixer: ws_comma RefactoringTool: Refactored ../python3/nltk_contrib/nltk_contrib/mit/six863/parse/treeview.py RefactoringTool: Files that were modified: RefactoringTool: ../python3/nltk_contrib/nltk_contrib/mit/six863/parse/treeview.py --- ../python3/nltk_contrib/nltk_contrib/mit/six863/parse/treeview.py (original) +++ ../python3/nltk_contrib/nltk_contrib/mit/six863/parse/treeview.py (refactored) @@ -1,4 +1,4 @@ -import Tkinter +import tkinter from nltk.draw import TreeWidget from nltk.draw import CanvasFrame @@ -7,32 +7,32 @@ class TreeView: def __init__(self, trees, root=None): if len(trees) == 0: - print "No trees to display." + print("No trees to display.") return newroot = False if root is None: - root = Tkinter.Tk() + root = tkinter.Tk() window = root newroot = True else: - window = Tkinter.Toplevel(root) + window = tkinter.Toplevel(root) window.title("Parse Tree") window.geometry("600x400") self.cf = CanvasFrame(window) self.cf.pack(side='top', expand=1, fill='both') - buttons = Tkinter.Frame(window) + buttons = tkinter.Frame(window) buttons.pack(side='bottom', fill='x') - self.spin = Tkinter.Spinbox(buttons, from_=1, to=len(trees), + self.spin = tkinter.Spinbox(buttons, from_=1, to=len(trees), command=self.showtree, width=3) if len(trees) > 1: self.spin.pack(side='left') - self.label = Tkinter.Label(buttons, text="of %d" % len(trees)) + self.label = tkinter.Label(buttons, text="of %d" % len(trees)) if len(trees) > 1: self.label.pack(side='left') - self.done = Tkinter.Button(buttons, text="Done", command=window.destroy) + self.done = tkinter.Button(buttons, text="Done", command=window.destroy) self.done.pack(side='right') - self.printps = Tkinter.Button(buttons, text="Print to Postscript", command=self.cf.print_to_file) + self.printps = tkinter.Button(buttons, text="Print to Postscript", command=self.cf.print_to_file) self.printps.pack(side='right') self.trees = trees + for i in '$(find ../python3 -type f -name '\''*.py'\'' |grep -v '\''Tables\.py'\'')' + 2to3 -w -n ../python3/nltk_contrib/nltk_contrib/mit/six863/parse/tree.py RefactoringTool: Skipping optional fixer: buffer RefactoringTool: Skipping optional fixer: idioms RefactoringTool: Skipping optional fixer: set_literal RefactoringTool: Skipping optional fixer: ws_comma RefactoringTool: No files need to be modified. 
+ for i in '$(find ../python3 -type f -name '\''*.py'\'' |grep -v '\''Tables\.py'\'')' + 2to3 -w -n ../python3/nltk_contrib/nltk_contrib/mit/six863/parse/test.py RefactoringTool: Skipping optional fixer: buffer RefactoringTool: Skipping optional fixer: idioms RefactoringTool: Skipping optional fixer: set_literal RefactoringTool: Skipping optional fixer: ws_comma RefactoringTool: Refactored ../python3/nltk_contrib/nltk_contrib/mit/six863/parse/test.py RefactoringTool: Files that were modified: RefactoringTool: ../python3/nltk_contrib/nltk_contrib/mit/six863/parse/test.py --- ../python3/nltk_contrib/nltk_contrib/mit/six863/parse/test.py (original) +++ ../python3/nltk_contrib/nltk_contrib/mit/six863/parse/test.py (refactored) @@ -1,10 +1,10 @@ -from featurechart import * -from treeview import * +from .featurechart import * +from .treeview import * def demo(): cp = load_earley('gazdar6.cfg', trace=2) trees = cp.parse('the man who chased Fido returned') - for tree in trees: print tree + for tree in trees: print(tree) #run_profile() if __name__ == '__main__': demo() + for i in '$(find ../python3 -type f -name '\''*.py'\'' |grep -v '\''Tables\.py'\'')' + 2to3 -w -n ../python3/nltk_contrib/nltk_contrib/mit/six863/parse/featurelite.py RefactoringTool: Skipping optional fixer: buffer RefactoringTool: Skipping optional fixer: idioms RefactoringTool: Skipping optional fixer: set_literal RefactoringTool: Skipping optional fixer: ws_comma RefactoringTool: Refactored ../python3/nltk_contrib/nltk_contrib/mit/six863/parse/featurelite.py --- ../python3/nltk_contrib/nltk_contrib/mit/six863/parse/featurelite.py (original) +++ ../python3/nltk_contrib/nltk_contrib/mit/six863/parse/featurelite.py (refactored) @@ -84,7 +84,7 @@ class FeatureI(object): def __init__(self): - raise TypeError, "FeatureI is an abstract interface" + raise TypeError("FeatureI is an abstract interface") class _FORWARD(object): """ @@ -95,7 +95,7 @@ instantiated. """ def __init__(self): - raise TypeError, "The _FORWARD class is not meant to be instantiated" + raise TypeError("The _FORWARD class is not meant to be instantiated") class Variable(object): """ @@ -241,7 +241,7 @@ def variable_representer(dumper, var): "Output variables in YAML as ?name." 
- return dumper.represent_scalar(u'!var', u'?%s' % var.name()) + return dumper.represent_scalar('!var', '?%s' % var.name()) yaml.add_representer(Variable, variable_representer) def variable_constructor(loader, node): @@ -249,8 +249,8 @@ value = loader.construct_scalar(node) name = value[1:] return Variable(name) -yaml.add_constructor(u'!var', variable_constructor) -yaml.add_implicit_resolver(u'!var', re.compile(r'^\?\w+$')) +yaml.add_constructor('!var', variable_constructor) +yaml.add_implicit_resolver('!var', re.compile(r'^\?\w+$')) def _copy_and_bind(feature, bindings, memo=None): """ @@ -262,14 +262,14 @@ if memo is None: memo = {} if id(feature) in memo: return memo[id(feature)] if isinstance(feature, Variable) and bindings is not None: - if not bindings.has_key(feature.name()): + if feature.name() not in bindings: bindings[feature.name()] = feature.copy() result = _copy_and_bind(bindings[feature.name()], None, memo) else: if isMapping(feature): # Construct a new object of the same class result = feature.__class__() - for (key, value) in feature.items(): + for (key, value) in list(feature.items()): result[key] = _copy_and_bind(value, bindings, memo) else: result = feature memo[id(feature)] = result @@ -579,19 +579,19 @@ if memo is None: memo = {} copymemo = {} - if memo.has_key((id(feature1), id(feature2))): + if (id(feature1), id(feature2)) in memo: result = memo[id(feature1), id(feature2)] if result is UnificationFailure: if trace > 2: - print '(cached) Unifying: %r + %r --> [fail]' % (feature1, feature2) + print('(cached) Unifying: %r + %r --> [fail]' % (feature1, feature2)) raise result() if trace > 2: - print '(cached) Unifying: %r + %r --> ' % (feature1, feature2), - print repr(result) + print('(cached) Unifying: %r + %r --> ' % (feature1, feature2), end=' ') + print(repr(result)) return result if trace > 1: - print 'Unifying: %r + %r --> ' % (feature1, feature2), + print('Unifying: %r + %r --> ' % (feature1, feature2), end=' ') # Make copies of the two structures (since the unification algorithm is # destructive). Use the same memo, to preserve reentrance links between @@ -600,7 +600,7 @@ copy2 = _copy_and_bind(feature2, bindings2, copymemo) # Preserve links between bound variables and the two feature structures. for b in (bindings1, bindings2): - for (vname, value) in b.items(): + for (vname, value) in list(b.items()): value_id = id(value) if value_id in copymemo: b[vname] = copymemo[value_id] @@ -610,7 +610,7 @@ unified = _destructively_unify(copy1, copy2, bindings1, bindings2, memo, fail) except UnificationFailure: - if trace > 1: print '[fail]' + if trace > 1: print('[fail]') memo[id(feature1), id(feature2)] = UnificationFailure raise @@ -622,9 +622,9 @@ _lookup_values(bindings2, {}, remove=True) if trace > 1: RefactoringTool: Files that were modified: RefactoringTool: ../python3/nltk_contrib/nltk_contrib/mit/six863/parse/featurelite.py - print repr(unified) + print(repr(unified)) elif trace > 0: - print 'Unifying: %r + %r --> %r' % (feature1, feature2, repr(unified)) + print('Unifying: %r + %r --> %r' % (feature1, feature2, repr(unified))) memo[id(feature1), id(feature2)] = unified return unified @@ -640,11 +640,11 @@ and C{other} are undefined.
""" if depth > 50: - print "Infinite recursion in this unification:" - print show(dict(feature1=feature1, feature2=feature2, - bindings1=bindings1, bindings2=bindings2, memo=memo)) - raise ValueError, "Infinite recursion in unification" - if memo.has_key((id(feature1), id(feature2))): + print("Infinite recursion in this unification:") + print(show(dict(feature1=feature1, feature2=feature2, + bindings1=bindings1, bindings2=bindings2, memo=memo))) + raise ValueError("Infinite recursion in unification") + if (id(feature1), id(feature2)) in memo: result = memo[id(feature1), id(feature2)] if result is UnificationFailure: raise result() unified = _do_unify(feature1, feature2, bindings1, bindings2, memo, fail, @@ -687,9 +687,9 @@ # At this point, we know they're both mappings. # Do the destructive part of unification. - while feature2.has_key(_FORWARD): feature2 = feature2[_FORWARD] + while _FORWARD in feature2: feature2 = feature2[_FORWARD] if feature1 is not feature2: feature2[_FORWARD] = feature1 - for (fname, val2) in feature2.items(): + for (fname, val2) in list(feature2.items()): if fname == _FORWARD: continue val1 = feature1.get(fname) feature1[fname] = _destructively_unify(val1, val2, bindings1, @@ -702,12 +702,12 @@ the target of its forward pointer (to preserve reentrance). """ if not isMapping(feature): return - if visited.has_key(id(feature)): return + if id(feature) in visited: return visited[id(feature)] = True - for fname, fval in feature.items(): + for fname, fval in list(feature.items()): if isMapping(fval): - while fval.has_key(_FORWARD): + while _FORWARD in fval: fval = fval[_FORWARD] feature[fname] = fval _apply_forwards(fval, visited) @@ -739,10 +739,10 @@ else: return var.forwarded_self() if not isMapping(mapping): return mapping - if visited.has_key(id(mapping)): return mapping + if id(mapping) in visited: return mapping visited[id(mapping)] = True - for fname, fval in mapping.items(): + for fname, fval in list(mapping.items()): if isMapping(fval): _lookup_values(fval, visited) elif isinstance(fval, Variable): @@ -763,9 +763,9 @@ Replace any feature structures that have been forwarded by their new identities. """ - for (key, value) in bindings.items(): - if isMapping(value) and value.has_key(_FORWARD): - while value.has_key(_FORWARD): + for (key, value) in list(bindings.items()): + if isMapping(value) and _FORWARD in value: + while _FORWARD in value: value = value[_FORWARD] bindings[key] = value + for i in '$(find ../python3 -type f -name '\''*.py'\'' |grep -v '\''Tables\.py'\'')' + 2to3 -w -n ../python3/nltk_contrib/nltk_contrib/mit/six863/parse/featurechart.py RefactoringTool: Skipping optional fixer: buffer RefactoringTool: Skipping optional fixer: idioms RefactoringTool: Skipping optional fixer: set_literal RefactoringTool: Skipping optional fixer: ws_comma RefactoringTool: Refactored ../python3/nltk_contrib/nltk_contrib/mit/six863/parse/featurechart.py --- ../python3/nltk_contrib/nltk_contrib/mit/six863/parse/featurechart.py (original) +++ ../python3/nltk_contrib/nltk_contrib/mit/six863/parse/featurechart.py (refactored) @@ -18,7 +18,7 @@ #from category import * from nltk import cfg -from featurelite import * +from .featurelite import * def load_earley(filename, trace=1): """ @@ -112,7 +112,7 @@ @return: the value of the left-hand side with variables set. @rtype: C{Category} """ - return apply(TreeEdge.lhs(self), self._vars) + return TreeEdge.lhs(self)(*self._vars) def orig_lhs(self): """ @@ -126,7 +126,7 @@ @return: the value of the right-hand side with variables set. 
@rtype: C{Category} """ - return tuple(apply(x, self._vars) for x in TreeEdge.rhs(self)) + return tuple(x(*self._vars) for x in TreeEdge.rhs(self)) def orig_rhs(self): """ @@ -161,7 +161,7 @@ left_bindings = left_edge.vars().copy() right_bindings = right_edge.vars().copy() try: - unified = unify(left_edge.next(), right_edge.lhs(), left_bindings, + unified = unify(next(left_edge), right_edge.lhs(), left_bindings, right_bindings, memo=self.unify_memo, trace=self.trace-2) if isinstance(unified, Category): unified.freeze() except UnificationFailure: return @@ -211,7 +211,7 @@ for prod in grammar.productions(): bindings = edge.vars().copy() try: - unified = unify(edge.next(), prod.lhs(), bindings, {}, + unified = unify(next(edge), prod.lhs(), bindings, {}, memo=self.unify_memo, trace=self.trace-2) if isinstance(unified, Category): unified.freeze() except UnificationFailure: @@ -256,7 +256,7 @@ # Width, for printing trace edges. #w = 40/(chart.num_leaves()+1) w = 2 - if self._trace > 0: print ' '*9, chart.pp_leaves(w) + if self._trace > 0: print(' '*9, chart.pp_leaves(w)) # Initialize the chart with a special "starter" edge. root = GrammarCategory(pos='[INIT]') @@ -270,7 +270,7 @@ #scanner = FeatureScannerRule(self._lexicon) for end in range(chart.num_leaves()+1): - if self._trace > 1: print 'Processing queue %d' % end + if self._trace > 1: print('Processing queue %d' % end) # Scanner rule substitute, i.e. this is being used in place # of a proper FeatureScannerRule at the moment. @@ -283,14 +283,14 @@ {}) chart.insert(new_pos_edge, (new_leaf_edge,)) if self._trace > 0: - print 'Scanner ', chart.pp_edge(new_pos_edge,w) + print('Scanner ', chart.pp_edge(new_pos_edge,w)) for edge in chart.select(end=end): if edge.is_incomplete(): for e in predictor.apply(chart, grammar, edge): if self._trace > 1: - print 'Predictor', chart.pp_edge(e,w) + print('Predictor', chart.pp_edge(e,w)) #if edge.is_incomplete(): # for e in scanner.apply(chart, grammar, edge): # if self._trace > 0: @@ -298,7 +298,7 @@ if edge.is_complete(): for e in completer.apply(chart, grammar, edge): if self._trace > 0: - print 'Completer', chart.pp_edge(e,w) + print('Completer', chart.pp_edge(e,w)) # Output a list of complete parses. 
return chart.parses(root) @@ -346,14 +346,14 @@ return earley_lexicon.get(word.upper(), []) sent = 'I saw John with a dog with my cookie' - print "Sentence:\n", sent + print("Sentence:\n", sent) from nltk import tokenize tokens = list(tokenize.whitespace(sent)) t = time.time() cp = FeatureEarleyChartParse(earley_grammar, lexicon, trace=1) trees = cp.get_parse_list(tokens) - print "Time: %s" % (time.time() - t) - for tree in trees: print tree + print("Time: %s" % (time.time() - t)) + for tree in trees: print(tree) def run_profile(): import profile RefactoringTool: Files that were modified: RefactoringTool: ../python3/nltk_contrib/nltk_contrib/mit/six863/parse/featurechart.py + for i in '$(find ../python3 -type f -name '\''*.py'\'' |grep -v '\''Tables\.py'\'')' + 2to3 -w -n ../python3/nltk_contrib/nltk_contrib/mit/six863/parse/chart.py RefactoringTool: Skipping optional fixer: buffer RefactoringTool: Skipping optional fixer: idioms RefactoringTool: Skipping optional fixer: set_literal RefactoringTool: Skipping optional fixer: ws_comma RefactoringTool: Refactored ../python3/nltk_contrib/nltk_contrib/mit/six863/parse/chart.py --- ../python3/nltk_contrib/nltk_contrib/mit/six863/parse/chart.py (original) +++ ../python3/nltk_contrib/nltk_contrib/mit/six863/parse/chart.py (refactored) @@ -9,8 +9,8 @@ # # $Id: chart.py 4157 2007-02-28 09:56:25Z stevenbird $ -from __init__ import * -from tree import Tree +from .__init__ import * +from .tree import Tree from nltk import cfg """ @@ -163,7 +163,7 @@ """ raise AssertionError('EdgeI is an abstract interface') - def next(self): + def __next__(self): """ @return: The element of this edge's right-hand side that immediately follows its dot. @@ -272,7 +272,7 @@ def dot(self): return self._dot def is_complete(self): return self._dot == len(self._rhs) def is_incomplete(self): return self._dot != len(self._rhs) - def next(self): + def __next__(self): if self._dot >= len(self._rhs): return None else: return self._rhs[self._dot] @@ -335,7 +335,7 @@ def dot(self): return 0 def is_complete(self): return True def is_incomplete(self): return False - def next(self): return None + def __next__(self): return None # Comparisons & hashing def __cmp__(self, other): @@ -488,12 +488,12 @@ if restrictions=={}: return iter(self._edges) # Find the index corresponding to the given restrictions. - restr_keys = restrictions.keys() + restr_keys = list(restrictions.keys()) restr_keys.sort() restr_keys = tuple(restr_keys) # If it doesn't exist, then create it. - if not self._indexes.has_key(restr_keys): + if restr_keys not in self._indexes: self._add_index(restr_keys) vals = [restrictions[k] for k in restr_keys] @@ -507,7 +507,7 @@ # Make sure it's a valid index. for k in restr_keys: if not hasattr(EdgeI, k): - raise ValueError, 'Bad restriction: %s' % k + raise ValueError('Bad restriction: %s' % k) # Create the index. self._indexes[restr_keys] = {} @@ -539,12 +539,12 @@ C{child_pointer_list} with C{edge}. """ # Is it a new edge? - if not self._edge_to_cpls.has_key(edge): # Add it to the list of edges.
self._edges.append(edge) # Register with indexes - for (restr_keys, index) in self._indexes.items(): + for (restr_keys, index) in list(self._indexes.items()): vals = [getattr(edge, k)() for k in restr_keys] index = self._indexes[restr_keys] index.setdefault(tuple(vals),[]).append(edge) @@ -553,7 +553,7 @@ cpls = self._edge_to_cpls.setdefault(edge,{}) child_pointer_list = tuple(child_pointer_list) - if cpls.has_key(child_pointer_list): + if child_pointer_list in cpls: # We've already got this CPL; return false. return False else: @@ -600,7 +600,7 @@ than once, we can reuse the same trees. """ # If we've seen this edge before, then reuse our old answer. - if memo.has_key(edge): return memo[edge] + if edge in memo: return memo[edge] trees = [] @@ -676,7 +676,7 @@ been used to form this edge. """ # Make a copy, in case they modify it. - return self._edge_to_cpls.get(edge, {}).keys() + return list(self._edge_to_cpls.get(edge, {}).keys()) #//////////////////////////////////////////////////////////// # Display @@ -838,7 +838,7 @@ @rtype: C{list} of L{EdgeI} @return: A list of the edges that were added. """ - raise AssertionError, 'ChartRuleI is an abstract interface' + raise AssertionError('ChartRuleI is an abstract interface') def apply_iter(self, chart, grammar, *edges): """ @@ -853,7 +853,7 @@ that should be passed to C{apply} is specified by the L{NUM_EDGES} class variable. """ - raise AssertionError, 'ChartRuleI is an abstract interface' + raise AssertionError('ChartRuleI is an abstract interface') def apply_everywhere(self, chart, grammar): """ @@ -863,7 +863,7 @@ @rtype: C{list} of L{EdgeI} @return: A list of the edges that were added. """ - raise AssertionError, 'ChartRuleI is an abstract interface' + raise AssertionError('ChartRuleI is an abstract interface') def apply_everywhere_iter(self, chart, grammar): """ @@ -874,7 +874,7 @@ return. @rtype: C{iter} of L{EdgeI} """ - raise AssertionError, 'ChartRuleI is an abstract interface' + raise AssertionError('ChartRuleI is an abstract interface') class AbstractChartRule(object): """ @@ -892,7 +892,7 @@ # Subclasses must define apply_iter. def apply_iter(self, chart, grammar, *edges): - raise AssertionError, 'AbstractChartRule is an abstract class' + raise AssertionError('AbstractChartRule is an abstract class') # Default: loop through the given number of edges, and call # self.apply() for each set of edges. @@ -920,7 +920,7 @@ yield new_edge else: - raise AssertionError, 'NUM_EDGES>3 is not currently supported' + raise AssertionError('NUM_EDGES>3 is not currently supported') # Default: delegate to apply_iter. def apply(self, chart, grammar, *edges): @@ -952,7 +952,7 @@ def apply_iter(self, chart, grammar, left_edge, right_edge): # Make sure the rule is applicable. 
if not (left_edge.end() == right_edge.start() and - left_edge.next() == right_edge.lhs() and + next(left_edge) == right_edge.lhs() and left_edge.is_incomplete() and right_edge.is_complete()): return @@ -992,7 +992,7 @@ if edge1.is_incomplete(): # edge1 = left_edge; edge2 = right_edge for edge2 in chart.select(start=edge1.end(), is_complete=True, - lhs=edge1.next()): + lhs=next(edge1)): for new_edge in fr.apply_iter(chart, grammar, edge1, edge2): yield new_edge else: @@ -1051,7 +1051,7 @@ NUM_EDGES = 1 def apply_iter(self, chart, grammar, edge): if edge.is_complete(): return - for prod in grammar.productions(lhs=edge.next()): + for prod in grammar.productions(lhs=next(edge)): new_edge = TreeEdge.from_production(prod, edge.end()) if chart.insert(new_edge, ()): yield new_edge @@ -1070,7 +1070,7 @@ if edge.is_complete() or edge.end() >= chart.num_leaves(): return index = edge.end() leaf = chart.leaf(index) - if edge.next() == leaf: + if next(edge) == leaf: new_edge = LeafEdge(leaf, index) if chart.insert(new_edge, ()): yield new_edge @@ -1118,7 +1118,7 @@ # If we've already applied this rule to an edge with the same # next & end, and the chart & grammar have not changed, then # just return (no new edges to add). - done = self._done.get((edge.next(), edge.end()), (None,None)) + done = self._done.get((next(edge), edge.end()), (None,None)) if done[0] is chart and done[1] is grammar: return # Add all the edges indicated by the top down expand rule. @@ -1126,7 +1126,7 @@ yield e # Record the fact that we've applied this rule. - self._done[edge.next(), edge.end()] = (chart, grammar) + self._done[next(edge), edge.end()] = (chart, grammar) def __str__(self): return 'Top Down Expand Rule' @@ -1218,11 +1218,11 @@ if edge.is_complete() or edge.end()>=chart.num_leaves(): return index = edge.end() leaf = chart.leaf(index) - if edge.next() in self._word_to_pos.get(leaf, []): + if next(edge) in self._word_to_pos.get(leaf, []): new_leaf_edge = LeafEdge(leaf, index) if chart.insert(new_leaf_edge, ()): yield new_leaf_edge - new_pos_edge = TreeEdge((index,index+1), edge.next(), + new_pos_edge = TreeEdge((index,index+1), next(edge), [leaf], 1) if chart.insert(new_pos_edge, (new_leaf_edge,)): yield new_pos_edge @@ -1283,7 +1283,7 @@ # Width, for printing trace edges. w = 50/(chart.num_leaves()+1) - if self._trace > 0: print ' ', chart.pp_leaves(w) + if self._trace > 0: print(' ', chart.pp_leaves(w)) # Initialize the chart with a special "starter" edge. root = cfg.Nonterminal('[INIT]') @@ -1296,20 +1296,20 @@ scanner = ScannerRule(self._lexicon) for end in range(chart.num_leaves()+1): - if self._trace > 1: print 'Processing queue %d' % end + if self._trace > 1: print('Processing queue %d' % end) for edge in chart.select(end=end): if edge.is_incomplete(): for e in predictor.apply(chart, grammar, edge): if self._trace > 0: - print 'Predictor', chart.pp_edge(e,w) + print('Predictor', chart.pp_edge(e,w)) if edge.is_incomplete(): for e in scanner.apply(chart, grammar, edge): if self._trace > 0: - print 'Scanner ', chart.pp_edge(e,w) + print('Scanner ', chart.pp_edge(e,w)) if edge.is_complete(): for e in completer.apply(chart, grammar, edge): if self._trace > 0: - print 'Completer', chart.pp_edge(e,w) + print('Completer', chart.pp_edge(e,w)) # Output a list of complete parses. return chart.parses(grammar.start(), tree_class=tree_class) @@ -1362,7 +1362,7 @@ # Width, for printing trace edges. 
w = 50/(chart.num_leaves()+1) - if self._trace > 0: print chart.pp_leaves(w) + if self._trace > 0: print(chart.pp_leaves(w)) edges_added = 1 while edges_added > 0: @@ -1371,11 +1371,11 @@ edges_added_by_rule = 0 for e in rule.apply_everywhere(chart, grammar): if self._trace > 0 and edges_added_by_rule == 0: - print '%s:' % rule + print('%s:' % rule) edges_added_by_rule += 1 - if self._trace > 1: print chart.pp_edge(e,w) + if self._trace > 1: print(chart.pp_edge(e,w)) if self._trace == 1 and edges_added_by_rule > 0: - print ' - Added %d edges' % edges_added_by_rule + print(' - Added %d edges' % edges_added_by_rule) edges_added += edges_added_by_rule # Return a list of complete parses. @@ -1437,14 +1437,14 @@ added with the current strategy and grammar. """ if self._chart is None: - raise ValueError, 'Parser must be initialized first' + raise ValueError('Parser must be initialized first') while 1: self._restart = False w = 50/(self._chart.num_leaves()+1) for e in self._parse(): - if self._trace > 1: print self._current_chartrule - if self._trace > 0: print self._chart.pp_edge(e,w) + if self._trace > 1: print(self._current_chartrule) + if self._trace > 0: print(self._chart.pp_edge(e,w)) yield e if self._restart: break else: @@ -1578,23 +1578,23 @@ # Tokenize a sample sentence. sent = 'I saw John with a dog with my cookie' - print "Sentence:\n", sent + print("Sentence:\n", sent) RefactoringTool: Files that were modified: RefactoringTool: ../python3/nltk_contrib/nltk_contrib/mit/six863/parse/chart.py from nltk import tokenize tokens = list(tokenize.whitespace(sent)) - print tokens + print(tokens) # Ask the user which parser to test - print ' 1: Top-down chart parser' - print ' 2: Bottom-up chart parser' - print ' 3: Earley parser' - print ' 4: Stepping chart parser (alternating top-down & bottom-up)' - print ' 5: All parsers' - print '\nWhich parser (1-5)? ', + print(' 1: Top-down chart parser') + print(' 2: Bottom-up chart parser') + print(' 3: Earley parser') + print(' 4: Stepping chart parser (alternating top-down & bottom-up)') + print(' 5: All parsers') + print('\nWhich parser (1-5)? ', end=' ') choice = sys.stdin.readline().strip() - print + print() if choice not in '12345': - print 'Bad parser number' + print('Bad parser number') return # Keep track of how long each parser takes. @@ -1607,7 +1607,7 @@ parses = cp.get_parse_list(tokens) times['top down'] = time.time()-t assert len(parses)==5, 'Not all parses found' - for tree in parses: print tree + for tree in parses: print(tree) # Run the bottom-up parser, if requested. if choice in ('2', '5'): @@ -1616,7 +1616,7 @@ parses = cp.get_parse_list(tokens) times['bottom up'] = time.time()-t assert len(parses)==5, 'Not all parses found' - for tree in parses: print tree + for tree in parses: print(tree) # Run the earley, if requested. if choice in ('3', '5'): @@ -1625,7 +1625,7 @@ parses = cp.get_parse_list(tokens) times['Earley parser'] = time.time()-t assert len(parses)==5, 'Not all parses found' - for tree in parses: print tree + for tree in parses: print(tree) # Run the stepping parser, if requested.
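Two details here are worth flagging. The trailing comma in `print '\nWhich parser (1-5)? ',` maps to `end=' '`, which keeps the prompt on one line. But the context line `w = 50/(chart.num_leaves()+1)` is untouched: 2to3 never rewrites `/`, and under Python 3 this expression now produces a float where the trace formatting expects an integer column width. Illustration with a toy value:

    num_leaves = 9
    w = 50 / (num_leaves + 1)       # Python 3: 5.0, no longer the int 5
    assert w == 5.0
    w = 50 // (num_leaves + 1)      # manual follow-up fix: floor division
    assert w == 5
    # Py2: print 'Which parser (1-5)? ',   (trailing comma suppresses the newline)
    print('Which parser (1-5)? ', end=' ')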
if choice in ('4', '5'): @@ -1633,24 +1633,24 @@ cp = SteppingChartParse(grammar, trace=1) cp.initialize(tokens) for i in range(5): - print '*** SWITCH TO TOP DOWN' + print('*** SWITCH TO TOP DOWN') cp.set_strategy(TD_STRATEGY) for j, e in enumerate(cp.step()): if j>20 or e is None: break - print '*** SWITCH TO BOTTOM UP' + print('*** SWITCH TO BOTTOM UP') cp.set_strategy(BU_STRATEGY) for j, e in enumerate(cp.step()): if j>20 or e is None: break times['stepping'] = time.time()-t assert len(cp.parses())==5, 'Not all parses found' - for parse in cp.parses(): print parse + for parse in cp.parses(): print(parse) # Print the times of all parsers: - maxlen = max(len(key) for key in times.keys()) - format = '%' + `maxlen` + 's parser: %6.3fsec' - times_items = times.items() + maxlen = max(len(key) for key in list(times.keys())) + format = '%' + repr(maxlen) + 's parser: %6.3fsec' + times_items = list(times.items()) times_items.sort(lambda a,b:cmp(a[1], b[1])) for (parser, t) in times_items: - print format % (parser, t) + print(format % (parser, t)) if __name__ == '__main__': demo() + for i in '$(find ../python3 -type f -name '\''*.py'\'' |grep -v '\''Tables\.py'\'')' + 2to3 -w -n ../python3/nltk_contrib/nltk_contrib/mit/six863/parse/cfg.py RefactoringTool: Skipping optional fixer: buffer RefactoringTool: Skipping optional fixer: idioms RefactoringTool: Skipping optional fixer: set_literal RefactoringTool: Skipping optional fixer: ws_comma RefactoringTool: Refactored ../python3/nltk_contrib/nltk_contrib/mit/six863/parse/cfg.py RefactoringTool: Files that were modified: RefactoringTool: ../python3/nltk_contrib/nltk_contrib/mit/six863/parse/cfg.py --- ../python3/nltk_contrib/nltk_contrib/mit/six863/parse/cfg.py (original) +++ ../python3/nltk_contrib/nltk_contrib/mit/six863/parse/cfg.py (refactored) @@ -226,8 +226,8 @@ @param rhs: The right-hand side of the new C{Production}. @type rhs: sequence of (C{Nonterminal} and (terminal)) """ - if isinstance(rhs, (str, unicode)): - raise TypeError, 'production right hand side should be a list, not a string' + if isinstance(rhs, str): + raise TypeError('production right hand side should be a list, not a string') self._lhs = lhs self._rhs = tuple(rhs) self._hash = hash((self._lhs, self._rhs)) @@ -385,7 +385,7 @@ """ # Use _PARSE_RE to check that it's valid. if not _PARSE_RE.match(s): - raise ValueError, 'Bad production string' + raise ValueError('Bad production string') # Use _SPLIT_RE to process it. pieces = _SPLIT_RE.split(s) pieces = [p for i,p in enumerate(pieces) if i%2==1] @@ -407,9 +407,9 @@ if line.startswith('#') or line=='': continue try: productions += parse_production(line) except ValueError: - raise ValueError, 'Unable to parse line %s' % linenum + raise ValueError('Unable to parse line %s' % linenum) if len(productions) == 0: - raise ValueError, 'No productions found!' 
+ raise ValueError('No productions found!') start = productions[0].lhs() return Grammar(start, productions) @@ -429,11 +429,11 @@ N, V, P, Det = cfg.nonterminals('N, V, P, Det') VP_slash_NP = VP/NP - print 'Some nonterminals:', [S, NP, VP, PP, N, V, P, Det, VP/NP] - print ' S.symbol() =>', `S.symbol()` - print - - print cfg.Production(S, [NP]) + print('Some nonterminals:', [S, NP, VP, PP, N, V, P, Det, VP/NP]) + print(' S.symbol() =>', repr(S.symbol())) + print() + + print(cfg.Production(S, [NP])) # Create some Grammar Productions grammar = cfg.parse_grammar(""" @@ -453,11 +453,11 @@ P -> 'in' """) - print 'A Grammar:', `grammar` - print ' grammar.start() =>', `grammar.start()` - print ' grammar.productions() =>', + print('A Grammar:', repr(grammar)) + print(' grammar.start() =>', repr(grammar.start())) + print(' grammar.productions() =>', end=' ') # Use string.replace(...) is to line-wrap the output. - print `grammar.productions()`.replace(',', ',\n'+' '*25) - print + print(repr(grammar.productions()).replace(',', ',\n'+' '*25)) + print() if __name__ == '__main__': demo() + for i in '$(find ../python3 -type f -name '\''*.py'\'' |grep -v '\''Tables\.py'\'')' + 2to3 -w -n ../python3/nltk_contrib/nltk_contrib/mit/six863/parse/category.py RefactoringTool: Skipping optional fixer: buffer RefactoringTool: Skipping optional fixer: idioms RefactoringTool: Skipping optional fixer: set_literal RefactoringTool: Skipping optional fixer: ws_comma RefactoringTool: Refactored ../python3/nltk_contrib/nltk_contrib/mit/six863/parse/category.py --- ../python3/nltk_contrib/nltk_contrib/mit/six863/parse/category.py (original) +++ ../python3/nltk_contrib/nltk_contrib/mit/six863/parse/category.py (refactored) @@ -11,10 +11,10 @@ # $Id: category.py 4162 2007-03-01 00:46:05Z stevenbird $ from nltk.semantics import logic -from cfg import * +from .cfg import * from kimmo import kimmo -from featurelite import * +from .featurelite import * from copy import deepcopy import yaml # import nltk.yamltags @@ -130,16 +130,16 @@ self._features[key] = value def items(self): - return self._features.items() + return list(self._features.items()) def keys(self): - return self._features.keys() + return list(self._features.keys()) def values(self): - return self._features.values() + return list(self._features.values()) def has_key(self, key): - return self._features.has_key(key) + return key in self._features def symbol(self): """ @@ -168,7 +168,7 @@ """ @return: a list of all features that have values. 
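Another context line that survives untouched in the chart.py demo above, `times_items.sort(lambda a,b:cmp(a[1], b[1]))`, will still fail at runtime on Python 3: `cmp()` is gone and `sort()` no longer accepts a comparison function. The manual fix sorts on a key; the timing values below are illustrative:

    import operator

    times = {'top down': 0.41, 'bottom up': 0.13, 'Earley parser': 0.27}
    # Py2: times_items.sort(lambda a, b: cmp(a[1], b[1]))
    times_items = sorted(times.items(), key=operator.itemgetter(1))
    for parser, t in times_items:
        print('%s parser: %6.3fsec' % (parser, t))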
""" - return self._features.keys() + return list(self._features.keys()) has_feature = has_key @@ -183,7 +183,7 @@ @staticmethod def _remove_unbound_vars(obj): - for (key, value) in obj.items(): + for (key, value) in list(obj.items()): if isinstance(value, Variable): del obj[key] elif isinstance(value, (Category, dict)): @@ -210,7 +210,7 @@ def _str(cls, obj, reentrances, reentrance_ids): segments = [] - keys = obj.keys() + keys = list(obj.keys()) keys.sort() for fname in keys: if fname == cls.headname: continue @@ -391,14 +391,14 @@ # Semantic value of the form '; return an ApplicationExpression match = _PARSE_RE['application'].match(s, position) if match is not None: - fun = ParserSubstitute(match.group(2)).next() - arg = ParserSubstitute(match.group(3)).next() + fun = next(ParserSubstitute(match.group(2))) + arg = next(ParserSubstitute(match.group(3))) return ApplicationExpressionSubst(fun, arg), match.end() # other semantic value enclosed by '< >'; return value given by the lambda expr parser match = _PARSE_RE['semantics'].match(s, position) if match is not None: - return ParserSubstitute(match.group(1)).next(), match.end() + return next(ParserSubstitute(match.group(1))), match.end() # String value if s[position] in "'\"": @@ -457,11 +457,11 @@ try: lhs, position = cls.inner_parse(s, position) lhs = cls(lhs) - except ValueError, e: + except ValueError as e: estr = ('Error parsing field structure\n\n\t' + s + '\n\t' + ' '*e.args[1] + '^ ' + 'Expected %s\n' % e.args[0]) - raise ValueError, estr + raise ValueError(estr) lhs.freeze() match = _PARSE_RE['arrow'].match(s, position) @@ -475,11 +475,11 @@ try: val, position = cls.inner_parse(s, position, {}) if isinstance(val, dict): val = cls(val) - except ValueError, e: + except ValueError as e: estr = ('Error parsing field structure\n\n\t' + s + '\n\t' + ' '*e.args[1] + '^ ' + 'Expected %s\n' % e.args[0]) - raise ValueError, estr + raise ValueError(estr) if isinstance(val, Category): val.freeze() rhs.append(val) position = _PARSE_RE['whitespace'].match(s, position).end() @@ -521,7 +521,7 @@ def _str(cls, obj, reentrances, reentrance_ids): segments = [] - keys = obj.keys() + keys = list(obj.keys()) keys.sort() for fname in kRefactoringTool: Files that were modified: RefactoringTool: ../python3/nltk_contrib/nltk_contrib/mit/six863/parse/category.py RefactoringTool: Warnings/messages while refactoring: RefactoringTool: ### In file ../python3/nltk_contrib/nltk_contrib/mit/six863/parse/category.py ### RefactoringTool: Line 129: could not convert: raise "Cannot modify a frozen Category" RefactoringTool: Python 3 does not support string exceptions eys: if fname == cls.headname: continue @@ -576,9 +576,9 @@ if slash_match is not None: position = slash_match.end() slash, position = GrammarCategory._parseval(s, position, reentrances) - if isinstance(slash, basestring): slash = {'pos': slash} + if isinstance(slash, str): slash = {'pos': slash} body['/'] = unify(body.get('/'), slash) - elif not body.has_key('/'): + elif '/' not in body: body['/'] = False return cls(body), position @@ -652,7 +652,7 @@ return lookup def earley_parser(self, trace=1): - from featurechart import FeatureEarleyChartParse + from .featurechart import FeatureEarleyChartParse if self.kimmo is None: lexicon = self.earley_lexicon() else: lexicon = self.kimmo_lexicon() @@ -706,28 +706,28 @@ yaml.add_representer(GrammarCategory, GrammarCategory.to_yaml) def demo(): - print "Category(pos='n', agr=dict(number='pl', gender='f')):" - print - print Category(pos='n', agr=dict(number='pl', 
gender='f')) - print repr(Category(pos='n', agr=dict(number='pl', gender='f'))) - print - print "GrammarCategory.parse('NP/NP'):" - print - print GrammarCategory.parse('NP/NP') - print repr(GrammarCategory.parse('NP/NP')) - print - print "GrammarCategory.parse('?x/?x'):" - print - print GrammarCategory.parse('?x/?x') - print repr(GrammarCategory.parse('?x/?x')) - print - print "GrammarCategory.parse('VP[+fin, agr=?x, tense=past]/NP[+pl, agr=?x]'):" - print - print GrammarCategory.parse('VP[+fin, agr=?x, tense=past]/NP[+pl, agr=?x]') - print repr(GrammarCategory.parse('VP[+fin, agr=?x, tense=past]/NP[+pl, agr=?x]')) - print + print("Category(pos='n', agr=dict(number='pl', gender='f')):") + print() + print(Category(pos='n', agr=dict(number='pl', gender='f'))) + print(repr(Category(pos='n', agr=dict(number='pl', gender='f')))) + print() + print("GrammarCategory.parse('NP/NP'):") + print() + print(GrammarCategory.parse('NP/NP')) + print(repr(GrammarCategory.parse('NP/NP'))) + print() + print("GrammarCategory.parse('?x/?x'):") + print() + print(GrammarCategory.parse('?x/?x')) + print(repr(GrammarCategory.parse('?x/?x'))) + print() + print("GrammarCategory.parse('VP[+fin, agr=?x, tense=past]/NP[+pl, agr=?x]'):") + print() + print(GrammarCategory.parse('VP[+fin, agr=?x, tense=past]/NP[+pl, agr=?x]')) + print(repr(GrammarCategory.parse('VP[+fin, agr=?x, tense=past]/NP[+pl, agr=?x]'))) + print() g = GrammarFile.read_file("speer.cfg") - print g.grammar() + print(g.grammar()) if __name__ == '__main__': demo() + for i in '$(find ../python3 -type f -name '\''*.py'\'' |grep -v '\''Tables\.py'\'')' + 2to3 -w -n ../python3/nltk_contrib/nltk_contrib/mit/six863/parse/__init__.py RefactoringTool: Skipping optional fixer: buffer RefactoringTool: Skipping optional fixer: idioms RefactoringTool: Skipping optional fixer: set_literal RefactoringTool: Skipping optional fixer: ws_comma RefactoringTool: Refactored ../python3/nltk_contrib/nltk_contrib/mit/six863/parse/__init__.py RefactoringTool: Files that were modified: RefactoringTool: ../python3/nltk_contrib/nltk_contrib/mit/six863/parse/__init__.py --- ../python3/nltk_contrib/nltk_contrib/mit/six863/parse/__init__.py (original) +++ ../python3/nltk_contrib/nltk_contrib/mit/six863/parse/__init__.py (refactored) @@ -131,7 +131,7 @@ """ # Make sure we're not directly instantiated: if self.__class__ == AbstractParse: - raise AssertionError, "Abstract classes can't be instantiated" + raise AssertionError("Abstract classes can't be instantiated") def parse(self, sentence): return self.get_parse_list(sentence.split()) @@ -155,11 +155,11 @@ line = line.strip() if not line: continue if line.startswith('#'): - print line + print(line) continue - print "Sentence:", line + print("Sentence:", line) parses = self.parse(line) - print "%d parses." % len(parses) - for tree in parses: print tree + print("%d parses." 
% len(parses)) + for tree in parses: print(tree) from nltk.parse import * + for i in '$(find ../python3 -type f -name '\''*.py'\'' |grep -v '\''Tables\.py'\'')' + 2to3 -w -n ../python3/nltk_contrib/nltk_contrib/mit/six863/kimmo/rules.py RefactoringTool: Skipping optional fixer: buffer RefactoringTool: Skipping optional fixer: idioms RefactoringTool: Skipping optional fixer: set_literal RefactoringTool: Skipping optional fixer: ws_comma RefactoringTool: Refactored ../python3/nltk_contrib/nltk_contrib/mit/six863/kimmo/rules.py RefactoringTool: Files that were modified: RefactoringTool: ../python3/nltk_contrib/nltk_contrib/mit/six863/kimmo/rules.py --- ../python3/nltk_contrib/nltk_contrib/mit/six863/kimmo/rules.py (original) +++ ../python3/nltk_contrib/nltk_contrib/mit/six863/kimmo/rules.py (refactored) @@ -1,7 +1,7 @@ from nltk.parse import Tree -from fsa import FSA +from .fsa import FSA from nltk import tokenize -from pairs import KimmoPair, sort_subsets +from .pairs import KimmoPair, sort_subsets from copy import deepcopy import re, yaml @@ -65,13 +65,11 @@ def parse_table(name, table, subsets): lines = table.split('\n') if len(lines) < 4: - raise ValueError,\ - "Rule %s has too few lines to be an FSA table." % name + raise ValueError("Rule %s has too few lines to be an FSA table." % name) pairs1 = lines[1].strip().split() pairs2 = lines[2].strip().split() if len(pairs1) != len(pairs2): - raise ValueError,\ - "Rule %s has pair definitions that don't line up." % name + raise ValueError("Rule %s has pair definitions that don't line up." % name) pairs = [KimmoPair(p1, p2) for p1, p2 in zip(pairs1, pairs2)] finals = [] fsa = FSA() @@ -80,18 +78,16 @@ if not line: continue groups = re.match(r'(\w+)(\.|:)\s*(.*)', line) if groups is None: - raise ValueError,\ - "Can't parse this line of the state table for rule %s:\n%s"\ - % (name, line) + raise ValueError("Can't parse this line of the state table for rule %s:\n%s"\ + % (name, line)) state, char, morestates = groups.groups() if fsa.start() == 0: fsa.set_start(state) if char == ':': finals.append(state) fsa.add_state(state) morestates = morestates.split() if len(morestates) != len(pairs): - raise ValueError,\ - "Rule %s has a row of the wrong length:\n%s\ngot %d items, should be %d"\ - % (name, line, len(morestates), len(pairs)) + raise ValueError("Rule %s has a row of the wrong length:\n%s\ngot %d items, should be %d"\ + % (name, line, len(morestates), len(pairs))) for pair, nextstate in zip(pairs, morestates): fsa.insert_safe(state, pair, nextstate) fsa.set_final(finals) @@ -101,11 +97,11 @@ def from_dfa_dict(name, states, subsets): fsa = FSA() pairs = set([KimmoPair.make('@')]) - for (statename, trans) in states.items(): + for (statename, trans) in list(states.items()): for label in trans: if label != 'others': pairs.add(KimmoPair.make(label)) - for (statename, trans) in states.items(): + for (statename, trans) in list(states.items()): parts = statename.split() source = parts[-1] if not parts[0].startswith('rej'): @@ -120,7 +116,7 @@ for label in trans: if label != 'others': used_pairs.add(KimmoPair.make(label)) - for label, target in trans.items(): + for label, target in list(trans.items()): if label.lower() == 'others': fsa.insert_safe(source, KimmoPair.make('@'), target) for pair in pairs.difference(used_pairs): @@ -366,11 +362,11 @@ def demo(): rule = KimmoArrowRule("elision-e", "e:0 <== CN u _ +:@ VO", {'@': 'aeiouhklmnpw', 'VO': 'aeiou', 'CN': 'hklmnpw'}) - print rule - print rule._left_fsa - print rule._right_fsa - print - print 
rule._fsa + print(rule) + print(rule._left_fsa) + print(rule._right_fsa) + print() + print(rule._fsa) if __name__ == '__main__': demo() + for i in '$(find ../python3 -type f -name '\''*.py'\'' |grep -v '\''Tables\.py'\'')' + 2to3 -w -n ../python3/nltk_contrib/nltk_contrib/mit/six863/kimmo/pairs.py RefactoringTool: Skipping optional fixer: buffer RefactoringTool: Skipping optional fixer: idioms RefactoringTool: Skipping optional fixer: set_literal RefactoringTool: Skipping optional fixer: ws_comma RefactoringTool: Refactored ../python3/nltk_contrib/nltk_contrib/mit/six863/kimmo/pairs.py RefactoringTool: Files that were modified: RefactoringTool: ../python3/nltk_contrib/nltk_contrib/mit/six863/kimmo/pairs.py --- ../python3/nltk_contrib/nltk_contrib/mit/six863/kimmo/pairs.py (original) +++ ../python3/nltk_contrib/nltk_contrib/mit/six863/kimmo/pairs.py (refactored) @@ -66,5 +66,5 @@ parts = text.split(':') if len(parts) == 1: return KimmoPair(text, text) elif len(parts) == 2: return KimmoPair(parts[0], parts[1]) - else: raise ValueError, "Bad format for pair: %s" % text + else: raise ValueError("Bad format for pair: %s" % text) + for i in '$(find ../python3 -type f -name '\''*.py'\'' |grep -v '\''Tables\.py'\'')' + 2to3 -w -n ../python3/nltk_contrib/nltk_contrib/mit/six863/kimmo/morphology.py RefactoringTool: Skipping optional fixer: buffer RefactoringTool: Skipping optional fixer: idioms RefactoringTool: Skipping optional fixer: set_literal RefactoringTool: Skipping optional fixer: ws_comma RefactoringTool: Refactored ../python3/nltk_contrib/nltk_contrib/mit/six863/kimmo/morphology.py RefactoringTool: Files that were modified: RefactoringTool: ../python3/nltk_contrib/nltk_contrib/mit/six863/kimmo/morphology.py --- ../python3/nltk_contrib/nltk_contrib/mit/six863/kimmo/morphology.py (original) +++ ../python3/nltk_contrib/nltk_contrib/mit/six863/kimmo/morphology.py (refactored) @@ -1,6 +1,6 @@ -from fsa import FSA +from .fsa import FSA import yaml -from featurelite import unify +from .featurelite import unify def startswith(stra, strb): return stra[:len(strb)] == strb @@ -44,14 +44,14 @@ def fsa(self): return self._fsa def valid_lexical(self, state, word, alphabet): trans = self.fsa()._transitions[state] - for label in trans.keys(): + for label in list(trans.keys()): if label is not None and startswith(label[0], word) and len(label[0]) > len(word): next = label[0][len(word):] for pair in alphabet: if startswith(next, pair.input()): yield pair.input() def next_states(self, state, word): choices = self.fsa()._transitions[state] - for (key, value) in choices.items(): + for (key, value) in list(choices.items()): if key is None: if word == '': for next in value: yield (next, None) @@ -102,11 +102,11 @@ word = '' fsa.insert_safe(state, (word, features), next) else: - print "Ignoring line in morphology: %r" % line + print("Ignoring line in morphology: %r" % line) return KimmoMorphology(fsa) def demo(): - print KimmoMorphology.load('english.lex') + print(KimmoMorphology.load('english.lex')) if __name__ == '__main__': demo() + for i in '$(find ../python3 -type f -name '\''*.py'\'' |grep -v '\''Tables\.py'\'')' + 2to3 -w -n ../python3/nltk_contrib/nltk_contrib/mit/six863/kimmo/kimmotest.py RefactoringTool: Skipping optional fixer: buffer RefactoringTool: Skipping optional fixer: idioms RefactoringTool: Skipping optional fixer: set_literal RefactoringTool: Skipping optional fixer: ws_comma RefactoringTool: Refactored ../python3/nltk_contrib/nltk_contrib/mit/six863/kimmo/kimmotest.py RefactoringTool: Files 
that were modified: RefactoringTool: ../python3/nltk_contrib/nltk_contrib/mit/six863/kimmo/kimmotest.py --- ../python3/nltk_contrib/nltk_contrib/mit/six863/kimmo/kimmotest.py (original) +++ ../python3/nltk_contrib/nltk_contrib/mit/six863/kimmo/kimmotest.py (refactored) @@ -1,4 +1,4 @@ -from kimmo import * +from .kimmo import * k = KimmoRuleSet.load('english.yaml') -print list(k.generate('`slip+ed', TextTrace(3))) -print list(k.recognize('slipped', TextTrace(1))) +print(list(k.generate('`slip+ed', TextTrace(3)))) +print(list(k.recognize('slipped', TextTrace(1)))) + for i in '$(find ../python3 -type f -name '\''*.py'\'' |grep -v '\''Tables\.py'\'')' + 2to3 -w -n ../python3/nltk_contrib/nltk_contrib/mit/six863/kimmo/kimmo.py RefactoringTool: Skipping optional fixer: buffer RefactoringTool: Skipping optional fixer: idioms RefactoringTool: Skipping optional fixer: set_literal RefactoringTool: Skipping optional fixer: ws_comma RefactoringTool: Refactored ../python3/nltk_contrib/nltk_contrib/mit/six863/kimmo/kimmo.py --- ../python3/nltk_contrib/nltk_contrib/mit/six863/kimmo/kimmo.py (original) +++ ../python3/nltk_contrib/nltk_contrib/mit/six863/kimmo/kimmo.py (refactored) @@ -2,15 +2,15 @@ # by Rob Speer (rspeer@mit.edu) # based on code from Carl de Marcken, Beracah Yankama, and Rob Speer -from rules import KimmoArrowRule, KimmoFSARule -from pairs import KimmoPair, sort_subsets -from morphology import * -from fsa import FSA +from .rules import KimmoArrowRule, KimmoFSARule +from .pairs import KimmoPair, sort_subsets +from .morphology import * +from .fsa import FSA import yaml def _pairify(state): newstate = {} - for label, targets in state.items(): + for label, targets in list(state.items()): newstate[KimmoPair.make(label)] = targets return newstate @@ -191,7 +191,7 @@ def _advance_rule(self, rule, state, pair): trans = rule.fsa()._transitions[state] - expected_pairs = sort_subsets(trans.keys(), self._subsets) + expected_pairs = sort_subsets(list(trans.keys()), self._subsets) for comppair in expected_pairs: if comppair.includes(pair, self._subsets): return rule.fsa().nextState(state, comppair) @@ -200,16 +200,16 @@ def _test_case(self, input, outputs, arrow, method): outputs.sort() if arrow == '<=': - print '%s %s %s' % (', '.join(outputs), arrow, input) + print('%s %s %s' % (', '.join(outputs), arrow, input)) else: - print '%s %s %s' % (input, arrow, ', '.join(outputs)) + print('%s %s %s' % (input, arrow, ', '.join(outputs))) value = method(input) if len(value) and isinstance(value[0], tuple): results = [v[0] for v in value] else: results = value results.sort() if outputs != results: - print ' Failed: got %s' % (', '.join(results) or 'no results') + print(' Failed: got %s' % (', '.join(results) or 'no results')) return False else: return True @@ -244,7 +244,7 @@ arrow = arrow_to_try break if arrow is None: - raise ValueError, "Can't find arrow in line: %s" % line + raise ValueError("Can't find arrow in line: %s" % line) lexicals = lexicals.strip().split(', ') surfaces = surfaces.strip().split(', ') if lexicals == ['']: lexicals = [] @@ -348,28 +348,28 @@ if lexicon: lexicon = KimmoMorphology.load(lexicon) subsets = map['subsets'] - for key, value in subsets.items(): - if isinstance(value, basestring): + for key, value in list(subsets.items()): + if isinstance(value, str): subsets[key] = value.split() defaults = map['defaults'] - if isinstance(defaults, basestring): + if isinstance(defaults, str): defaults = defaults.split() defaults = [KimmoPair.make(text) for text in defaults] ruledic = 
map['rules'] rules = [] - for (name, rule) in ruledic.items(): + for (name, rule) in list(ruledic.items()): if isinstance(rule, dict): rules.append(KimmoFSARule.from_dfa_dict(name, rule, subsets)) - elif isinstance(rule, basestring): + elif isinstance(rule, str): if rule.strip().startswith('FSA'): rules.append(KimmoFSARule.parse_table(name, rule, subsets)) else: rules.append(KimmoArrowRule(name, rule, subsets)) else: - raise ValueError, "Can't recognize the data structure in '%s' as a rule: %s" % (name, rule) + raise ValueError("Can't recognize the data structure in '%s' as a rule: %s" % (name, rule)) return cls(subsets, defaults, rules, lexicon) def gui(self, startTk=True): - import draw + from . import draw return draw.KimmoGUI(self, startTk) draw_graphs = gui RefactoringTool: Files that were modified: RefactoringTool: ../python3/nltk_contrib/nltk_contrib/mit/six863/kimmo/kimmo.py @@ -392,50 +392,50 @@ surface = ''.join(p.output() for p in pairs) indent = ' '*len(lexical) if self.verbosity > 2: - print '%s%s<%s>' % (indent, lexical, curr.input()) - print '%s%s<%s>' % (indent, surface, curr.output()) + print('%s%s<%s>' % (indent, lexical, curr.input())) + print('%s%s<%s>' % (indent, surface, curr.output())) for rule, state1, state2 in zip(rules, prev_states, states): - print '%s%s: %s => %s' % (indent, rule.name(), state1, state2) + print('%s%s: %s => %s' % (indent, rule.name(), state1, state2)) if morphology_state: - print '%sMorphology: %r => %s' % (indent, word, morphology_state) - print + print('%sMorphology: %r => %s' % (indent, word, morphology_state)) + print() elif self.verbosity > 1: - print '%s%s<%s>' % (indent, lexical, curr.input()) - print '%s%s<%s>' % (indent, surface, curr.output()) - z = zip(prev_states, states) + print('%s%s<%s>' % (indent, lexical, curr.input())) + print('%s%s<%s>' % (indent, surface, curr.output())) + z = list(zip(prev_states, states)) if morphology_state: z.append((word, morphology_state)) - print indent + (" ".join('%s>%s' % (old, new) for old, new in z)) + print(indent + (" ".join('%s>%s' % (old, new) for old, new in z))) blocked = [] for rule, state in zip(rules, states): if str(state).lower() in ['0', 'reject']: blocked.append(rule.name()) if blocked: - print '%s[blocked by %s]' % (indent, ", ".join(blocked)) - print + print('%s[blocked by %s]' % (indent, ", ".join(blocked))) + print() else: - print '%s%s<%s> | %s<%s>' % (indent, lexical, curr.input(), - surface, curr.output()), + print('%s%s<%s> | %s<%s>' % (indent, lexical, curr.input(), + surface, curr.output()), end=' ') if morphology_state: - print '\t%r => %s' % (word, morphology_state), + print('\t%r => %s' % (word, morphology_state), end=' ') blocked = [] for rule, state in zip(rules, states): if str(state).lower() in ['0', 'reject']: blocked.append(rule.name()) if blocked: - print ' [blocked by %s]' % (", ".join(blocked)), - print + print(' [blocked by %s]' % (", ".join(blocked)), end=' ') + print() def succeed(self, pairs): lexical = ''.join(p.input() for p in pairs) surface = ''.join(p.output() for p in pairs) indent = ' '*len(lexical) - print '%s%s' % (indent, lexical) - print '%s%s' % (indent, surface) - print '%sSUCCESS: %s <=> %s' % (indent, lexical, surface) - print - print + print('%s%s' % (indent, lexical)) + print('%s%s' % (indent, surface)) + print('%sSUCCESS: %s <=> %s' % (indent, lexical, surface)) + print() + print() def load(filename): """ + for i in '$(find ../python3 -type f -name '\''*.py'\'' |grep -v '\''Tables\.py'\'')' + 2to3 -w -n
../python3/nltk_contrib/nltk_contrib/mit/six863/kimmo/fsa.py RefactoringTool: Skipping optional fixer: buffer RefactoringTool: Skipping optional fixer: idioms RefactoringTool: Skipping optional fixer: set_literal RefactoringTool: Skipping optional fixer: ws_comma RefactoringTool: Refactored ../python3/nltk_contrib/nltk_contrib/mit/six863/kimmo/fsa.py --- ../python3/nltk_contrib/nltk_contrib/mit/six863/kimmo/fsa.py (original) +++ ../python3/nltk_contrib/nltk_contrib/mit/six863/kimmo/fsa.py (refactored) @@ -63,8 +63,8 @@ A generator that yields each transition arrow in the FSA in the form (source, label, target). """ - for (state, map) in self._transitions.items(): - for (symbol, targets) in map.items(): + for (state, map) in list(self._transitions.items()): + for (symbol, targets) in list(map.items()): for target in targets: yield (state, symbol, target) @@ -73,7 +73,7 @@ A generator for all possible labels taking state s1 to state s2. """ map = self._transitions.get(s1, {}) - for (symbol, targets) in map.items(): + for (symbol, targets) in list(map.items()): if s2 in targets: yield symbol def sigma(self): @@ -134,7 +134,7 @@ @returns: a list of all states in the FSA. @rtype: list """ - return self._transitions.keys() + return list(self._transitions.keys()) def add_final(self, state): """ @@ -184,11 +184,11 @@ @param s2: the destination of the transition """ if s1 not in self.states(): - raise ValueError, "State %s does not exist in %s" % (s1, - self.states()) + raise ValueError("State %s does not exist in %s" % (s1, + self.states())) if s2 not in self.states(): - raise ValueError, "State %s does not exist in %s" % (s2, - self.states()) + raise ValueError("State %s does not exist in %s" % (s2, + self.states())) self._add_transition(self._transitions, s1, label, s2) self._add_transition(self._reverse, s2, label, s1) @@ -212,16 +212,16 @@ @param s2: the destination of the transition """ if s1 not in self.states(): - raise ValueError, "State %s does not exist" % s1 + raise ValueError("State %s does not exist" % s1) if s2 not in self.states(): - raise ValueError, "State %s does not exist" % s1 + raise ValueError("State %s does not exist" % s1) self._del_transition(self._transitions, s1, label, s2) self._del_transition(self._reverse, s2, label, s1) def delete_state(self, state): "Removes a state and all its transitions from the FSA." if state not in self.states(): - raise ValueError, "State %s does not exist" % state + raise ValueError("State %s does not exist" % state) for (s1, label, s2) in self.incident_transitions(state): self.delete(s1, label, s2) del self._transitions[state] @@ -235,10 +235,10 @@ result = set() forward = self._transitions[state] backward = self._reverse[state] - for label, targets in forward.items(): + for label, targets in list(forward.items()): for target in targets: result.add((state, label, target)) - for label, targets in backward.items(): + for label, targets in list(backward.items()): for target in targets: result.add((target, label, state)) return result @@ -248,9 +248,9 @@ Assigns a state a new identifier. 
""" if old not in self.states(): - raise ValueError, "State %s does not exist" % old + raise ValueError("State %s does not exist" % old) if new in self.states(): - raise ValueError, "State %s already exists" % new + raise ValueError("State %s already exists" % new) changes = [] for (s1, symbol, s2) in self.generate_transitions(): if s1 == old and s2 == old: @@ -261,7 +261,7 @@ changes.append((s1, symbol, s2, s1, symbol, new)) for (leftstate, symbol, rightstate, newleft, newsym, newright)\ in changes: - print leftstate, symbolRefactoringTool: Files that were modified: RefactoringTool: ../python3/nltk_contrib/nltk_contrib/mit/six863/kimmo/fsa.py , rightstate, newleft, newsym, newright + print(leftstate, symbol, rightstate, newleft, newsym, newright) self.delete(leftstate, symbol, rightstate) self.insert_safe(newleft, newsym, newright) del self._transitions[old] @@ -284,8 +284,8 @@ Return whether this is a DFA (every symbol leads from a state to at most one target state). """ - for map in self._transitions.values(): - for targets in map.values(): + for map in list(self._transitions.values()): + for targets in list(map.values()): if len(targets) > 1: return False return True @@ -297,14 +297,14 @@ """ next = self.next(state, symbol) if len(next) > 1: - raise ValueError, "This FSA is nondeterministic -- use nextStates instead." + raise ValueError("This FSA is nondeterministic -- use nextStates instead.") elif len(next) == 1: return list(next)[0] else: return None def forward_traverse(self, state): "All states reachable by following transitions from a given state." result = set() - for (symbol, targets) in self._transitions[state].items(): + for (symbol, targets) in list(self._transitions[state].items()): result = result.union(targets) return result @@ -312,7 +312,7 @@ """All states from which a given state is reachable by following transitions.""" result = set() - for (symbol, targets) in self._reverse[state].items(): + for (symbol, targets) in list(self._reverse[state].items()): result = result.union(targets) return result @@ -344,7 +344,7 @@ self._clean_map(self._reverse[state]) def _clean_map(self, map): - for (key, value) in map.items(): + for (key, value) in list(map.items()): if len(value) == 0: del map[key] @@ -406,7 +406,7 @@ for label in self.sigma(): nfa_next = tuple(self.e_closure(self.move(map[dfa_state], label))) - if map.has_key(nfa_next): + if nfa_next in map: dfa_next = map[nfa_next] else: dfa_next = dfa.new_state() @@ -422,7 +422,7 @@ "Generate all accepting sequences of length at most maxlen." if maxlen > 0: if state in self._finals: - print prefix + print(prefix) for (s1, labels, s2) in self.outgoing_transitions(state): for label in labels(): self.generate(maxlen-1, s2, prefix+label) @@ -431,14 +431,14 @@ """ Print a representation of this FSA (in human-readable YAML format). """ - print yaml.dump(self) + print(yaml.dump(self)) @classmethod def from_yaml(cls, loader, node): map = loader.construct_mapping(node) result = cls(map.get('sigma', []), {}, map.get('finals', [])) - for (s1, map1) in map['transitions'].items(): - for (symbol, targets) in map1.items(): + for (s1, map1) in list(map['transitions'].items()): + for (symbol, targets) in list(map1.items()): for s2 in targets: result.insert(s1, symbol, s2) return result @@ -590,19 +590,19 @@ # Use a regular expression to initialize the FSA. 
re = 'abcd' - print 'Regular Expression:', re + print('Regular Expression:', re) re2nfa(fsa, re) - print "NFA:" + print("NFA:") fsa.pp() # Convert the (nondeterministic) FSA to a deterministic FSA. dfa = fsa.dfa() - print "DFA:" + print("DFA:") dfa.pp() # Prune the DFA dfa.prune() - print "PRUNED DFA:" + print("PRUNED DFA:") dfa.pp() # Use the FSA to generate all strings of length less than 3 + for i in '$(find ../python3 -type f -name '\''*.py'\'' |grep -v '\''Tables\.py'\'')' + 2to3 -w -n ../python3/nltk_contrib/nltk_contrib/mit/six863/kimmo/featurelite.py RefactoringTool: Skipping optional fixer: buffer RefactoringTool: Skipping optional fixer: idioms RefactoringTool: Skipping optional fixer: set_literal RefactoringTool: Skipping optional fixer: ws_comma RefactoringTool: Refactored ../python3/nltk_contrib/nltk_contrib/mit/six863/kimmo/featurelite.py --- ../python3/nltk_contrib/nltk_contrib/mit/six863/kimmo/featurelite.py (original) +++ ../python3/nltk_contrib/nltk_contrib/mit/six863/kimmo/featurelite.py (refactored) @@ -91,7 +91,7 @@ instantiated. """ def __init__(self): - raise TypeError, "The _FORWARD class is not meant to be instantiated" + raise TypeError("The _FORWARD class is not meant to be instantiated") class Variable(object): """ @@ -237,7 +237,7 @@ def variable_representer(dumper, var): "Output variables in YAML as ?name." - return dumper.represent_scalar(u'!var', u'?%s' % var.name()) + return dumper.represent_scalar('!var', '?%s' % var.name()) yaml.add_representer(Variable, variable_representer) def variable_constructor(loader, node): @@ -245,8 +245,8 @@ value = loader.construct_scalar(node) name = value[1:] return Variable(name) -yaml.add_constructor(u'!var', variable_constructor) -yaml.add_implicit_resolver(u'!var', re.compile(r'^\?\w+$')) +yaml.add_constructor('!var', variable_constructor) +yaml.add_implicit_resolver('!var', re.compile(r'^\?\w+$')) def _copy_and_bind(feature, bindings, memo=None): """ @@ -258,14 +258,14 @@ if memo is None: memo = {} if id(feature) in memo: return memo[id(feature)] if isinstance(feature, Variable) and bindings is not None: - if not bindings.has_key(feature.name()): + if feature.name() not in bindings: bindings[feature.name()] = feature.copy() result = _copy_and_bind(bindings[feature.name()], None, memo) else: if isMapping(feature): # Construct a new object of the same class result = feature.__class__() - for (key, value) in feature.items(): + for (key, value) in list(feature.items()): result[key] = _copy_and_bind(value, bindings, memo) else: result = feature memo[id(feature)] = result @@ -576,7 +576,7 @@ copy2 = _copy_and_bind(feature2, bindings2, copymemo) # Preserve links between bound variables and the two feature structures. for b in (bindings1, bindings2): - for (vname, value) in b.items(): + for (vname, value) in list(b.items()): value_id = id(value) if value_id in copymemo: b[vname] = copymemo[value_id] @@ -602,7 +602,7 @@ UnificationFailure is raised, and the values of C{self} and C{other} are undefined. """ - if memo.has_key((id(feature1), id(feature2))): + if (id(feature1), id(feature2)) in memo: return memo[id(feature1), id(feature2)] unified = _do_unify(feature1, feature2, bindings1, bindings2, memo, fail) memo[id(feature1), id(feature2)] = unified @@ -643,9 +643,9 @@ # At this point, we know they're both mappings. # Do the destructive part of unification. 
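The featurelite.py hunks above also drop the `u''` prefixes on the YAML tags: Python 3 strings are unicode already, and PyYAML accepts plain `str` for `add_constructor` and `add_implicit_resolver`. A minimal round-trip sketch of the same `?name` variable convention (assumes PyYAML is installed; this Variable class is a stub, not the featurelite one):

    import re
    import yaml

    class Variable:
        def __init__(self, name):
            self.name = name

    def variable_constructor(loader, node):
        return Variable(loader.construct_scalar(node)[1:])  # strip the '?'

    yaml.add_constructor('!var', variable_constructor)
    yaml.add_implicit_resolver('!var', re.compile(r'^\?\w+$'))

    v = yaml.load('?x', Loader=yaml.Loader)
    assert isinstance(v, Variable) and v.name == 'x'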
- while feature2.has_key(_FORWARD): feature2 = feature2[_FORWARD] + while _FORWARD in feature2: feature2 = feature2[_FORWARD] feature2[_FORWARD] = feature1 - for (fname, val2) in feature2.items(): + for (fname, val2) in list(feature2.items()): if fname == _FORWARD: continue val1 = feature1.get(fname) feature1[fname] = _destructively_unify(val1, val2, bindings1, @@ -658,12 +658,12 @@ the target of its forward pointer (to preserve reentrance). """ if not isMapping(feature): return - if visited.has_key(id(feature)): return + if id(feature) in visited: return visited[id(feature)] = True - for fname, fval in feature.items(): + for fname, fval in list(feature.items()): if isMapping(fval): - while fval.has_key(_FORWARD): + while _FORWARD in fval: fval = fval[_FORWARD] feature[fname] = fval _apply_forwards(fval, visited) @@ -695,10 +695,10 @@ else: return var.forwarded_self() if not isMapping(mapping): return mapping - if visited.has_key(id(mapping)): return mapping + if id(mapping) in visited: return mapping visited[id(mapping)] = True RefactoringTool: Files that were modified: RefactoringTool: ../python3/nltk_contrib/nltk_contrib/mit/six863/kimmo/featurelite.py - for fname, fval in mapping.items(): + for fname, fval in list(mapping.items()): if isMapping(fval): _lookup_values(fval, visited) elif isinstance(fval, Variable): @@ -719,9 +719,9 @@ Replace any feature structures that have been forwarded by their new identities. """ - for (key, value) in bindings.items(): - if isMapping(value) and value.has_key(_FORWARD): - while value.has_key(_FORWARD): + for (key, value) in list(bindings.items()): + if isMapping(value) and _FORWARD in value: + while _FORWARD in value: value = value[_FORWARD] bindings[key] = value + for i in '$(find ../python3 -type f -name '\''*.py'\'' |grep -v '\''Tables\.py'\'')' + 2to3 -w -n ../python3/nltk_contrib/nltk_contrib/mit/six863/kimmo/draw.py RefactoringTool: Skipping optional fixer: buffer RefactoringTool: Skipping optional fixer: idioms RefactoringTool: Skipping optional fixer: set_literal RefactoringTool: Skipping optional fixer: ws_comma RefactoringTool: Refactored ../python3/nltk_contrib/nltk_contrib/mit/six863/kimmo/draw.py RefactoringTool: Files that were modified: RefactoringTool: ../python3/nltk_contrib/nltk_contrib/mit/six863/kimmo/draw.py --- ../python3/nltk_contrib/nltk_contrib/mit/six863/kimmo/draw.py (original) +++ ../python3/nltk_contrib/nltk_contrib/mit/six863/kimmo/draw.py (refactored) @@ -1,10 +1,10 @@ -import Tkinter as tk -from morphology import KimmoMorphology -from fsa import FSA +import tkinter as tk +from .morphology import KimmoMorphology +from .fsa import FSA class KimmoGUI(object): def __init__(self, ruleset, startTk=False): - import Tkinter as tk + import tkinter as tk if startTk: self._root = tk.Tk() else: self._root = tk.Toplevel() @@ -131,7 +131,7 @@ def highlight_states(self, states, morph): select = self.listbox.curselection() or 0 self.listbox.delete(0, tk.END) - for (index, stored) in self.widget_store.items(): + for (index, stored) in list(self.widget_store.items()): graph, widget = stored if index == -1: state = morph else: state = states[index] + for i in '$(find ../python3 -type f -name '\''*.py'\'' |grep -v '\''Tables\.py'\'')' + 2to3 -w -n ../python3/nltk_contrib/nltk_contrib/mit/six863/kimmo/__init__.py RefactoringTool: Skipping optional fixer: buffer RefactoringTool: Skipping optional fixer: idioms RefactoringTool: Skipping optional fixer: set_literal RefactoringTool: Skipping optional fixer: ws_comma RefactoringTool: Refactored
../python3/nltk_contrib/nltk_contrib/mit/six863/kimmo/__init__.py RefactoringTool: Files that were modified: RefactoringTool: ../python3/nltk_contrib/nltk_contrib/mit/six863/kimmo/__init__.py --- ../python3/nltk_contrib/nltk_contrib/mit/six863/kimmo/__init__.py (original) +++ ../python3/nltk_contrib/nltk_contrib/mit/six863/kimmo/__init__.py (refactored) @@ -1 +1 @@ -from kimmo import * +from .kimmo import * + for i in '$(find ../python3 -type f -name '\''*.py'\'' |grep -v '\''Tables\.py'\'')' + 2to3 -w -n ../python3/nltk_contrib/nltk_contrib/mit/six863/__init__.py RefactoringTool: Skipping optional fixer: buffer RefactoringTool: Skipping optional fixer: idioms RefactoringTool: Skipping optional fixer: set_literal RefactoringTool: Skipping optional fixer: ws_comma RefactoringTool: No files need to be modified. + for i in '$(find ../python3 -type f -name '\''*.py'\'' |grep -v '\''Tables\.py'\'')' + 2to3 -w -n ../python3/nltk_contrib/nltk_contrib/mit/__init__.py RefactoringTool: Skipping optional fixer: buffer RefactoringTool: Skipping optional fixer: idioms RefactoringTool: Skipping optional fixer: set_literal RefactoringTool: Skipping optional fixer: ws_comma RefactoringTool: No files need to be modified. + for i in '$(find ../python3 -type f -name '\''*.py'\'' |grep -v '\''Tables\.py'\'')' + 2to3 -w -n ../python3/nltk_contrib/nltk_contrib/misc/paradigmquery.py RefactoringTool: Skipping optional fixer: buffer RefactoringTool: Skipping optional fixer: idioms RefactoringTool: Skipping optional fixer: set_literal RefactoringTool: Skipping optional fixer: ws_comma RefactoringTool: Refactored ../python3/nltk_contrib/nltk_contrib/misc/paradigmquery.py RefactoringTool: Files that were modified: RefactoringTool: ../python3/nltk_contrib/nltk_contrib/misc/paradigmquery.py --- ../python3/nltk_contrib/nltk_contrib/misc/paradigmquery.py (original) +++ ../python3/nltk_contrib/nltk_contrib/misc/paradigmquery.py (refactored) @@ -47,7 +47,7 @@ self.xml = None # If p_string was given, parse it - if p_string <> None: + if p_string != None: self.parse(p_string) def parse(self, p_string): @@ -124,7 +124,7 @@ try: self.parseList = rd_parser.get_parse_list(toklist)[0] except IndexError: - print "Could not parse query." + print("Could not parse query.") return # Set the nltk.parse.tree tree for this query to the global sentence @@ -142,13 +142,13 @@ Returns the results from the CFG parsing """ if self.string == None: - print "No string has been parsed. Please use parse(string)." + print("No string has been parsed. Please use parse(string).") return None return self.nltktree def getXML(self): if self.string == None: - print "No string has been parsed. Please use parse(string)." + print("No string has been parsed. 
Please use parse(string).") return None return '\n' + self.xml \ + "" @@ -279,16 +279,16 @@ query = r'table(one/two/three, four, five)' # Print the query - print """ + print(""" ================================================================================ Query: ParadigmQuery(query) ================================================================================ -""" +""") a = ParadigmQuery(query) - print query + print(query) # Print the Tree representation - print """ + print(""" ================================================================================ Tree: getTree() O is an operator @@ -296,19 +296,19 @@ H is a hierarchy D is a domain ================================================================================ -""" - print a.getTree() +""") + print(a.getTree()) # Print the XML representation - print """ + print(""" ================================================================================ XML: getXML() ================================================================================ -""" - print a.getXML() +""") + print(a.getXML()) # Some space - print + print() if __name__ == '__main__': + for i in '$(find ../python3 -type f -name '\''*.py'\'' |grep -v '\''Tables\.py'\'')' + 2to3 -w -n ../python3/nltk_contrib/nltk_contrib/misc/paradigm.py RefactoringTool: Skipping optional fixer: buffer RefactoringTool: Skipping optional fixer: idioms RefactoringTool: Skipping optional fixer: set_literal RefactoringTool: Skipping optional fixer: ws_comma RefactoringTool: Refactored ../python3/nltk_contrib/nltk_contrib/misc/paradigm.py --- ../python3/nltk_contrib/nltk_contrib/misc/paradigm.py (original) +++ ../python3/nltk_contrib/nltk_contrib/misc/paradigm.py (refactored) @@ -22,7 +22,7 @@ # a.setOutput('term') # output is sent to terminal from xml.dom.ext.reader import Sax2 -from paradigmquery import ParadigmQuery +from .paradigmquery import ParadigmQuery import re, os class Paradigm(object): @@ -73,9 +73,9 @@ s = "" while s != "exit": s = "exit" - try: s = raw_input(">") + try: s = input(">") except EOFError: - print s + print(s) if s == "exit": return if s == "quit": @@ -93,7 +93,7 @@ # parse the query parse = ParadigmQuery(p_string) except: - print "Could not parse query." 
+ print("Could not parse query.") return try: @@ -103,7 +103,7 @@ if result == None: raise Error except: - print "Sorry, no result can be returned" + print("Sorry, no result can be returned") return try: @@ -111,7 +111,7 @@ if self.format == "html": output = '\n' # Include CSS if we need to - if self.css <> None: + if self.css != None: output += '\n' @@ -124,14 +124,14 @@ output = result.getText() except: output = None - print "--no output--" + print("--no output--") return # Print to terminal if output is set, otherwise to file if self.output == "term": - print output + print(output) else: - print "Output written to file:", self.output + print("Output written to file:", self.output) f = open(self.output, 'w') f.write(output) @@ -151,9 +151,9 @@ elif p_string == "text": self.format = "text" else: - print "Unknown format:", p_string - print "Valid formats are: text, html" - print "Setting format = text" + print("Unknown format:", p_string) + print("Valid formats are: text, html") + print("Setting format = text") self.format = "text" def setCSS(self, p_string=None): @@ -161,8 +161,8 @@ Set the file location for a Cascading Stylesheet: None or filename This allows for simple formatting """ - if p_string <> None: - print "Using CSS file:", p_string + if p_string != None: + print("Using CSS file:", p_string) self.output = p_string def setOutput(self, p_string=None): @@ -174,9 +174,9 @@ p_string = "term" # set to term if requested, otherwise filename if p_string == "term": - print "Directing output to terminal" + print("Directing output to terminal") else: - print "Directing output to file:", p_string + print("Directing output to file:", p_string) self.output = p_string @@ -201,7 +201,7 @@ f = open(try_filename) p_filename = try_filename except IOError: - print "Cannot find file" + print("Cannot find file") return None f.close() @@ -241,14 +241,14 @@ self.data.append(tmp_dict) # Talk to the user - print "Paradigm information successfully loaded from file:", p_filename + print("Paradigm information successfully loaded from file:", p_filename) # State the number and print out a list of attributes - print " "*4 + str(len(self.attributes)) + " attributes imported:", + print(" "*4 + str(len(self.attributes)) + " attributes imported:", end=' ') for att in self.attributes: - print att, - print + print(att, end=' ') + print() # State the number of paradigm objects imported - print " "*4 + str(len(self.data)) + " paradigm objects imported." + print(" "*4 + str(len(self.data)) + " paradigm objects imported.") return @@ -360,7 +360,7 @@ self.paradigm.attributes[self.attribute] except KeyError: self.error = "I couldn't find this attribute: " + self.attribute - print self.error + print(self.error) def __getitem__(self, p_index): return self.paradigm.attributes[self.attribute][p_index] @@ -616,10 +616,10 @@ vertical_header_rows = vertical_header.split('') cell_rows = str_cells.replace('','').split('') # Join two lists - zipped = zip(vertical_header_rows, cell_rows) + zipped = list(zip(vertical_header_rows, cell_rows)) str_zipped = "" for (header,cells) in zipped: - if header <> '': + if header != '': str_zipped += header + cells + "\n" # Return all the elements @@ -629,22 +629,22 @@ """ Return a horizontal html table (?) """ - print "?: getHorizontalHTML() called on a table." + print("?: getHorizontalHTML() called on a table.") return None def getText(self): """ Return text for this table (?) """ - print "?: getText() for a table? 
HAHAHAHAHA" - print "call setFormat('html') if you want to run queries like that" + print("?: getText() for a table? HAHAHAHAHA") + print("call setFormat('html') if you want to run queries like that") return def getConditions(self): """ Return conditions for this table (?) """ - print "?: getConditions() called on a table. I don't think so." + print("?: getConditions() called on a table. I don't think so.") return None def getMaxWidth(self): @@ -658,7 +658,7 @@ """ Return span for this table (?) """ - print "WTF: getSpan() called on a table." + print("WTF: getSpan() called on a table.") return None def getData(self, p_return, p_attDict): @@ -676,7 +676,7 @@ for datum in self.paradigm.data: inc = True # For each given attribute requirement - for att in p_attDict.keys(): + for att in list(p_attDict.keys()): # If the data object fails the requirement do not include if datum[att] != p_attDict[att]: inc = False @@ -704,74 +704,74 @@ If there is any key overlap, dict1 wins! (just make sure this doesn't happen) """ - for key in dict1.keys(): + for key in list(dict1.keys()): dict2[key] = dict1[key] return dict2 def demo(): # Print the query - print """ + print(""" ================================================================================ Load: Paradigm(file) ================================================================================ -""" - print - print ">>> a = Paradigm('german.xml')" - print +""") + print() + print(">>> a = Paradigm('german.xml')") + print() a = Paradigm('german.xml') - print - print ">>> a.setOutput('term')" - print + print() + print(">>> a.setOutput('term')") + print() a.setOutput('term') - print - print ">>> a.setFormat('text')" - print + print() + print(">>> a.setFormat('text')") + print() a.setFormat('text') # Print a domain - print """ + print(""" ================================================================================ Domain: case ================================================================================ -""" - print - print ">>> a.show('case')" - print +""") + print() + print(">>> a.show('case')") + print() a.show('case') # PrinRefactoringTool: Files that were modified: RefactoringTool: ../python3/nltk_contrib/nltk_contrib/misc/paradigm.py t a hierarchy - print """ + print(""" ================================================================================ Hierarchy: case/gender ================================================================================ -""" - print - print ">>> a.show('case/gender')" - print +""") + print() + print(">>> a.show('case/gender')") + print() a.show('case/gender') # Print a table - print """ + print(""" ================================================================================ Table: table(case/number,gender,content) ================================================================================ -""" - print - print ">>> a.setOutput('demo.html')" - print +""") + print() + print(">>> a.setOutput('demo.html')") + print() a.setOutput('demo.html') - print - print ">>> a.setFormat('html')" - print + print() + print(">>> a.setFormat('html')") + print() a.setFormat('html') - print - print ">>> a.show('table(case/number,gender,content)')" - print + print() + print(">>> a.show('table(case/number,gender,content)')") + print() a.show('table(case/number,gender,content)') # Some space - print + print() if __name__ == '__main__': demo() + for i in '$(find ../python3 -type f -name '\''*.py'\'' |grep -v '\''Tables\.py'\'')' + 2to3 -w -n ../python3/nltk_contrib/nltk_contrib/misc/marshalbrill.py RefactoringTool: Skipping 
optional fixer: buffer RefactoringTool: Skipping optional fixer: idioms RefactoringTool: Skipping optional fixer: set_literal RefactoringTool: Skipping optional fixer: ws_comma RefactoringTool: Refactored ../python3/nltk_contrib/nltk_contrib/misc/marshalbrill.py --- ../python3/nltk_contrib/nltk_contrib/misc/marshalbrill.py (original) +++ ../python3/nltk_contrib/nltk_contrib/misc/marshalbrill.py (refactored) @@ -187,7 +187,7 @@ rule. @rtype: C{list} of C{int} """ - return self.apply_at(tokens, range(len(tokens))) + return self.apply_at(tokens, list(range(len(tokens)))) def apply_at(self, tokens, positions): """ @@ -373,7 +373,7 @@ # Needs to include extract_property in order to distinguish subclasses # A nicer way would be welcome. return hash( (self._original, self._replacement, self._conditions, - self.extract_property.func_code) ) + self.extract_property.__code__) ) def __repr__(self): conditions = ' and '.join(['%s in %d...%d' % (v,s,e) @@ -456,7 +456,7 @@ C{Brill} training algorithms to generate candidate rules. """ def __init__(self): - raise AssertionError, "BrillTemplateI is an abstract interface" + raise AssertionError("BrillTemplateI is an abstract interface") def applicable_rules(self, tokens, i, correctTag): """ @@ -478,7 +478,7 @@ @type correctTag: (any) @rtype: C{list} of L{BrillRuleI} """ - raise AssertionError, "BrillTemplateI is an abstract interface" + raise AssertionError("BrillTemplateI is an abstract interface") def get_neighborhood(self, token, index): """ @@ -494,7 +494,7 @@ @type index: C{int} @rtype: C{Set} """ - raise AssertionError, "BrillTemplateI is an abstract interface" + raise AssertionError("BrillTemplateI is an abstract interface") class ProximateTokensTemplate(BrillTemplateI): """ @@ -671,8 +671,8 @@ @param min_score: The minimum acceptable net error reduction that each transformation must produce in the corpus. """ - if self._trace > 0: print ("Training Brill tagger on %d tokens..." % - len(train_tokens)) + if self._trace > 0: print(("Training Brill tagger on %d tokens..." % + len(train_tokens))) # Create a new copy of the training token, and run the initial # tagger on this. We will progressively update this test @@ -691,7 +691,7 @@ train_tokens) if rule is None or score < min_score: if self._trace > 1: - print 'Insufficient improvement; stopping' + print('Insufficient improvement; stopping') break else: # Add the rule to our list of rules. @@ -746,7 +746,7 @@ # once for each tag that the rule changes to an incorrect # value. score = fixscore - if correct_indices.has_key(rule.original_tag()): + if rule.original_tag() in correct_indices: for i in correct_indices[rule.original_tag()]: if rule.applies(test_tokens, i): score -= 1 @@ -791,7 +791,7 @@ # Convert the dictionary into a list of (rule, score) tuples, # sorted in descending order of score. 
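Two quieter renames in the marshalbrill.py hunks above: `range()` is a lazy sequence on Python 3, so 2to3 wraps `range(len(tokens))` in `list(...)` to preserve the old list semantics, and the function attribute `func_code` becomes `__code__` (present since Python 2.6, so the converted hash stays portable). Sketch:

    def extract_property(token):
        return token[0]

    h = hash(extract_property.__code__)   # Py2 spelling: func_code
    tokens = ['a', 'b', 'c']
    positions = list(range(len(tokens)))  # a bare range object is not a list
    assert positions == [0, 1, 2]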
- rule_score_items = rule_score_dict.items() + rule_score_items = list(rule_score_dict.items()) temp = [(-score, rule) for (rule, score) in rule_score_items] temp.sort() return [(rule, -negscore) for (negscore, rule) in temp] @@ -818,7 +818,7 @@ #//////////////////////////////////////////////////////////// def _trace_header(self): - print """ + print(""" B | S F r O | Score = Fixed - Broken c i o t | R Fixed = num tags changed incorrect -> correct @@ -826,13 +826,13 @@ r e e e | l Other = num tags changed incorrect -> incorrect e d n r | e ------------------+------------------------------------------------------- - """.rstrip() + """.rstrip()) def _trace_rule(self, rule, score, fixscore, numchanges): if self._trace > 2: - print ('%4d%4d%4d%4d ' % (score, fixscore, fixscore-score, - numchanges-fixscore*2+score)), '|', - print rule + print(('%4d%4d%4d%4d ' % (score, fixscore, fixscore-score, + numchanges-fixscore*2+score)), '|', end=' ') + print(rule) ###################################################################### ## Fast Brill Tagger Trainer @@ -899,7 +899,7 @@ # If the rule is already known to apply here, ignore. # (This only happens if the position's tag hasn't changed.) - if positionsByRule[rule].has_key(i): + if i in positionsByRule[rule]: return if rule.replacement_tag() == train_tokens[i][1]: @@ -912,7 +912,7 @@ # Update rules in the other dictionaries del rulesByScore[ruleScores[rule]][rule] ruleScores[rule] += positionsByRule[rule][i] - if not rulesByScore.has_key(ruleScores[rule]): + if ruleScores[rule] not in rulesByScore: rulesByScore[ruleScores[rule]] = {} rulesByScore[ruleScores[rule]][rule] = None rulesByPosition[i].add(rule) @@ -922,7 +922,7 @@ def _updateRuleNotApplies (rule, i): del rulesByScore[ruleScores[rule]][rule] ruleScores[rule] -= positionsByRule[rule][i] - if not rulesByScore.has_key(ruleScores[rule]): + if ruleScores[rule] not in rulesByScore: rulesByScore[ruleScores[rule]] = {} rulesByScore[ruleScores[rule]][rule] = None @@ -939,22 +939,22 @@ tag = tagged_tokens[i][1] if tag != train_tokens[i][1]: errorIndices.append(i) - if not tagIndices.has_key(tag): + if tag not in tagIndices: tagIndices[tag] = [] tagIndices[tag].append(i) - print "Finding useful rules..." + print("Finding useful rules...") # Collect all rules that fix any errors, with their positive scores. for i in errorIndices: for template in self._templates: # Find the templated rules that could fix the error. for rule in template.applicable_rules(tagged_tokens, i, train_tokens[i][1]): - if not positionsByRule.has_key(rule): + if rule not in positionsByRule: _initRule(rule) _updateRuleApplies(rule, i) - print "Done initializing %i useful rules." %len(positionsByRule) + print("Done initializing %i useful rules." %len(positionsByRule)) if TESTING: after = -1 # bug-check only @@ -973,7 +973,7 @@ # best rule. bestRule = None - bestRules = rulesByScore[maxScore].keys() + bestRules = list(rulesByScore[maxScore].keys()) for rule in bestRules: # Find the first relevant index at or following the first @@ -990,7 +990,7 @@ # If we checked all remaining indices and found no more errors: if ruleScores[rule] == maxScore: firstUnknownIndex[rule] = len(tagged_tokens) # i.e., we checked them all - print "%i) %s (score: %i)" %(len(rules)+1, rule, maxScore) + print("%i) %s (score: %i)" %(len(rules)+1, rule, maxScore)) bestRule = rule break @@ -1002,29 +1002,29 @@ # bug-check only if TESTING: before = len(_errorPositions(tagged_tokens, train_tokens)) - print "There are %i errors before applying this rule." 
%before + print("There are %i errors before applying this rule." %before) assert after == -1 or before == after, \ "after=%i but before=%i" %(after,before) - print "Applying best rule at %i locations..." \ - %len(positionsByRule[bestRule].keys()) + print("Applying best rule at %i locations..." \ + %len(list(positionsByRule[bestRule].keys()))) # If we reach this point, we've found a new best rule. # Apply the rule at the relevant sites. # (apply_at is a little inefficient here, since we know the rule applies # and don't actually need to test it again.) rules.append(bestRule) - bestRule.apply_at(tagged_tokens, positionsByRule[bestRule].keys()) + bestRule.apply_at(tagged_tokens, list(positionsByRule[bestRule].keys())) # Update the tag index accordingly. - for i in positionsByRule[bestRule].keys(): # where it applied + for i in list(positionsByRule[bestRule].keys()): # where it applied # Update positions of tags # First, find and delete the index for i from the old tag. oldIndex = bisect.bisect_left(tagIndices[bestRule.original_tag()], i) del tagIndices[bestRule.original_tag()][oldIndex] # Then, insert i into the index list of the new tag. - if not tagIndices.has_key(bestRule.replacement_tag()): + if bestRule.replacement_tag() not in tagIndices: tagIndices[bestRule.replacement_tag()] = [] newIndex = bisect.bisect_left(tagIndices[bestRule.replacement_tag()], i) tagIndices[bestRule.replacement_tag()].insert(newIndex, i) @@ -1037,11 +1037,11 @@ # # If a template now generates a different set of rules, we have # to update our indices to reflect that. - print "Updating neighborhoods of changed sites.\n" + print("Updating neighborhoods of changed sites.\n") # First, collect all the indices that might get new rules. neighbors = set() - for i in positionsByRule[bestRule].keys(): # sites changed + for i in list(positionsByRule[bestRule].keys()): # sites changed for template in self._templates: neighbors.update(template.get_neighborhood(tagged_tokens, i)) @@ -1062,21 +1062,21 @@ # Update rules only now generated by this template for newRule in siteRules - rulesByPosition[i]: d += 1 - if not positionsByRule.has_key(newRule): + if newRule not in positionsByRule: e += 1 _initRule(newRule) # make a new rule w/score=0 _updateRuleApplies(newRule, i) # increment score, etc. if TESTING: after = before - maxScore - print "%i obsolete rule applications, %i new ones, " %(c,d)+ \ - "using %i previously-unseen rules." %e + print("%i obsolete rule applications, %i new ones, " %(c,d)+ \ + "using %i previously-unseen rules." %e) maxScore = max(rulesByScore.keys()) # may have gone up - if self._trace > 0: print ("Training Brill tagger on %d tokens..." % - len(train_tokens)) + if self._trace > 0: print(("Training Brill tagger on %d tokens..." % + len(train_tokens))) # Maintain a list of the rules that apply at each position. rules_by_position = [{} for tok in train_tokens] @@ -1164,7 +1164,7 @@ # train is the proportion of data used in training; the rest is reserved # for testing. - print "Loading tagged data..." 
+ print("Loading tagged data...") sents = [] for item in treebank.items: sents.extenRefactoringTool: Files that were modified: RefactoringTool: ../python3/nltk_contrib/nltk_contrib/misc/marshalbrill.py d(treebank.tagged(item)) @@ -1182,13 +1182,13 @@ # Unigram tagger - print "Training unigram tagger:", + print("Training unigram tagger:", end=' ') u = tag.Unigram(backoff=NN_CD_tagger) # NB training and testing are required to use a list-of-lists structure, # so we wrap the flattened corpus data with the extra list structure. u.train([training_data]) - print("[accuracy: %f]" % tag.accuracy(u, [gold_data])) + print(("[accuracy: %f]" % tag.accuracy(u, [gold_data]))) # Brill tagger @@ -1209,13 +1209,13 @@ trainer = brill.BrillTrainer(u, templates, trace) b = trainer.train(training_data, max_rules, min_score) - print - print("Brill accuracy: %f" % tag.accuracy(b, [gold_data])) + print() + print(("Brill accuracy: %f" % tag.accuracy(b, [gold_data]))) print("\nRules: ") printRules = file(rule_output, 'w') for rule in b.rules(): - print(str(rule)) + print((str(rule))) printRules.write(str(rule)+"\n\n") testing_data = list(b.tag(testing_data)) @@ -1225,7 +1225,7 @@ for e in el: errorFile.write(e+"\n\n") errorFile.close() - print "Done; rules and errors saved to %s and %s." % (rule_output, error_output) + print("Done; rules and errors saved to %s and %s." % (rule_output, error_output)) if __name__ == '__main__': demo() + for i in '$(find ../python3 -type f -name '\''*.py'\'' |grep -v '\''Tables\.py'\'')' + 2to3 -w -n ../python3/nltk_contrib/nltk_contrib/misc/marshal.py RefactoringTool: Skipping optional fixer: buffer RefactoringTool: Skipping optional fixer: idioms RefactoringTool: Skipping optional fixer: set_literal RefactoringTool: Skipping optional fixer: ws_comma RefactoringTool: Refactored ../python3/nltk_contrib/nltk_contrib/misc/marshal.py RefactoringTool: Files that were modified: RefactoringTool: ../python3/nltk_contrib/nltk_contrib/misc/marshal.py --- ../python3/nltk_contrib/nltk_contrib/misc/marshal.py (original) +++ ../python3/nltk_contrib/nltk_contrib/misc/marshal.py (refactored) @@ -56,7 +56,7 @@ """ handler = file(filename, "w") - for text, tag in self._model.iteritems(): + for text, tag in self._model.items(): handler.write("%s:%s\n" % (text, tag)) handler.close() @@ -97,7 +97,7 @@ handler.write("length %i\n" % self._length) handler.write("minlength %i\n" % self._minlength) - for text, tag in self._model.iteritems(): + for text, tag in self._model.items(): handler.write("%s:%s\n" % (text, tag)) handler.close() @@ -203,4 +203,4 @@ #tagger.marshal("ngram.test") tagger.unmarshal("ngram.test") - print tagger._model + print(tagger._model) + for i in '$(find ../python3 -type f -name '\''*.py'\'' |grep -v '\''Tables\.py'\'')' + 2to3 -w -n ../python3/nltk_contrib/nltk_contrib/misc/lex.py RefactoringTool: Skipping optional fixer: buffer RefactoringTool: Skipping optional fixer: idioms RefactoringTool: Skipping optional fixer: set_literal RefactoringTool: Skipping optional fixer: ws_comma RefactoringTool: Refactored ../python3/nltk_contrib/nltk_contrib/misc/lex.py RefactoringTool: Files that were modified: RefactoringTool: ../python3/nltk_contrib/nltk_contrib/misc/lex.py --- ../python3/nltk_contrib/nltk_contrib/misc/lex.py (original) +++ ../python3/nltk_contrib/nltk_contrib/misc/lex.py (refactored) @@ -31,7 +31,7 @@ """ Output 'phon' values in 'stem + affix' notation. 
""" - return dumper.represent_scalar(u'!phon', u'%s + %s' % \ + return dumper.represent_scalar('!phon', '%s + %s' % \ (data['stem'], data['affix'])) yaml.add_representer(Phon, phon_representer) @@ -61,7 +61,7 @@ stem, affix = [normalize(s) for s in value.split('+')] return Phon(stem, affix) -yaml.add_constructor(u'!phon', phon_constructor) +yaml.add_constructor('!phon', phon_constructor) #following causes YAML to barf for some reason: #pattern = re.compile(r'^(\?)?\w+\s*\+\s*(\?)?\w+$') + for i in '$(find ../python3 -type f -name '\''*.py'\'' |grep -v '\''Tables\.py'\'')' + 2to3 -w -n ../python3/nltk_contrib/nltk_contrib/misc/langid.py RefactoringTool: Skipping optional fixer: buffer RefactoringTool: Skipping optional fixer: idioms RefactoringTool: Skipping optional fixer: set_literal RefactoringTool: Skipping optional fixer: ws_comma RefactoringTool: Refactored ../python3/nltk_contrib/nltk_contrib/misc/langid.py RefactoringTool: Files that were modified: RefactoringTool: ../python3/nltk_contrib/nltk_contrib/misc/langid.py --- ../python3/nltk_contrib/nltk_contrib/misc/langid.py (original) +++ ../python3/nltk_contrib/nltk_contrib/misc/langid.py (refactored) @@ -25,7 +25,7 @@ cls = classifier.get_class(gold_data[lang]) if cls == lang: correct += 1 - print correct, "in", len(gold_data), "correct" + print(correct, "in", len(gold_data), "correct") # features: character bigrams fd = detect.feature({"char-bigrams" : lambda t: [string.join(t)[n:n+2] for n in range(len(t)-1)]}) @@ -36,11 +36,11 @@ gold_data[lang] = training_data[lang][:50] training_data[lang] = training_data[lang][100:200] -print "Cosine classifier: ", +print("Cosine classifier: ", end=' ') run(classify.Cosine(fd), training_data, gold_data) -print "Naivebayes classifier: ", +print("Naivebayes classifier: ", end=' ') run(classify.NaiveBayes(fd), training_data, gold_data) -print "Spearman classifier: ", +print("Spearman classifier: ", end=' ') run(classify.Spearman(fd), training_data, gold_data) + for i in '$(find ../python3 -type f -name '\''*.py'\'' |grep -v '\''Tables\.py'\'')' + 2to3 -w -n ../python3/nltk_contrib/nltk_contrib/misc/kimmo.py RefactoringTool: Skipping optional fixer: buffer RefactoringTool: Skipping optional fixer: idioms RefactoringTool: Skipping optional fixer: set_literal RefactoringTool: Skipping optional fixer: ws_comma RefactoringTool: Refactored ../python3/nltk_contrib/nltk_contrib/misc/kimmo.py --- ../python3/nltk_contrib/nltk_contrib/misc/kimmo.py (original) +++ ../python3/nltk_contrib/nltk_contrib/misc/kimmo.py (refactored) @@ -16,7 +16,7 @@ # TODO: remove Unix dependencies -import Tkinter +import tkinter import os, re, sys, types, string, glob, time, md5 from nltk_contrib.fsa import * @@ -37,7 +37,7 @@ and we want batch mode, big file, or big input test with output. 
""" ########################################################################### -from ScrolledText import ScrolledText +from tkinter.scrolledtext import ScrolledText class KimmoGUI: def __init__(self, grammar, text, title='Kimmo Interface v1.78'): @@ -58,46 +58,46 @@ self.helpFilename = 'kimmo.help' - self._root = Tkinter.Tk() + self._root = tkinter.Tk() self._root.title(title) - ctlbuttons = Tkinter.Frame(self._root) + ctlbuttons = tkinter.Frame(self._root) ctlbuttons.pack(side='top', fill='x') - level1 = Tkinter.Frame(self._root) + level1 = tkinter.Frame(self._root) level1.pack(side='top', fill='none') - Tkinter.Frame(self._root).pack(side='top', fill='none') - level2 = Tkinter.Frame(self._root) + tkinter.Frame(self._root).pack(side='top', fill='none') + level2 = tkinter.Frame(self._root) level2.pack(side='top', fill='x') - buttons = Tkinter.Frame(self._root) + buttons = tkinter.Frame(self._root) buttons.pack(side='top', fill='none') - batchFrame = Tkinter.Frame(self._root) + batchFrame = tkinter.Frame(self._root) batchFrame.pack(side='top', fill='x') - self.batchpath = Tkinter.StringVar() - Tkinter.Label(batchFrame, text="Batch File:").pack(side='left') - Tkinter.Entry(batchFrame, background='white', foreground='black', + self.batchpath = tkinter.StringVar() + tkinter.Label(batchFrame, text="Batch File:").pack(side='left') + tkinter.Entry(batchFrame, background='white', foreground='black', width=30, textvariable=self.batchpath).pack(side='left') - Tkinter.Button(batchFrame, text='Go!', + tkinter.Button(batchFrame, text='Go!', background='#a0c0c0', foreground='black', command=self.batch).pack(side='left') - self.debugWin = Tkinter.StringVar() # change to a window and field eventually. - Tkinter.Entry(batchFrame, background='grey', foreground='red', + self.debugWin = tkinter.StringVar() # change to a window and field eventually. 
+ tkinter.Entry(batchFrame, background='grey', foreground='red', width=30, textvariable=self.debugWin).pack(side='right') - self.wordIn = Tkinter.StringVar() - Tkinter.Label(level2, text="Generate or Recognize:").pack(side='left') - Tkinter.Entry(level2, background='white', foreground='black', + self.wordIn = tkinter.StringVar() + tkinter.Label(level2, text="Generate or Recognize:").pack(side='left') + tkinter.Entry(level2, background='white', foreground='black', width=30, textvariable=self.wordIn).pack(side='left') - lexiconFrame = Tkinter.Frame(level1) - Tkinter.Label(lexiconFrame, text="Lexicon & Alternations").pack(side='top', + lexiconFrame = tkinter.Frame(level1) + tkinter.Label(lexiconFrame, text="Lexicon & Alternations").pack(side='top', fill='x') self.lexicon = ScrolledText(lexiconFrame, background='white', foreground='black', width=50, height=36, wrap='none') # setup the scrollbar - scroll = Tkinter.Scrollbar(lexiconFrame, orient='horizontal',command=self.lexicon.xview) + scroll = tkinter.Scrollbar(lexiconFrame, orient='horizontal',command=self.lexicon.xview) scroll.pack(side='bottom', fill='x') self.lexicon.configure(xscrollcommand = scroll.set) @@ -105,36 +105,36 @@ self.lexicon.pack(side='top') - midFrame = Tkinter.Frame(level1) - rulesFrame = Tkinter.Frame(midFrame) + midFrame = tkinter.Frame(level1) + rulesFrame = tkinter.Frame(midFrame) rulesFrame.pack(side='top', fill='x') - Tkinter.Label(rulesFrame, text="Rules/Subsets").pack(side='top', + tkinter.Label(rulesFrame, text="Rules/Subsets").pack(side='top', fill='x') self.rules = ScrolledText(rulesFrame, background='white', foreground='black', width=60, height=19, wrap='none') # setup the scrollbar - scroll = Tkinter.Scrollbar(rulesFrame, orient='horizontal',command=self.rules.xview) + scroll = tkinter.Scrollbar(rulesFrame, orient='horizontal',command=self.rules.xview) scroll.pack(side='bottom', fill='x') self.rules.configure(xscrollcommand = scroll.set) self.rules.pack(side='top') - midbetweenFrame = Tkinter.Frame(midFrame) + midbetweenFrame = tkinter.Frame(midFrame) midbetweenFrame.pack(side='top', fill='x') - Tkinter.Button(midbetweenFrame, text='clear', + tkinter.Button(midbetweenFrame, text='clear', background='#f0f0f0', foreground='black', - command= lambda start=1.0, end=Tkinter.END : self.results.delete(start,end) + command= lambda start=1.0, end=tkinter.END : self.results.delete(start,end) ).pack(side='right') - Tkinter.Label(midbetweenFrame, + tkinter.Label(midbetweenFrame, text="Results ").pack(side='right') self.results = ScrolledText(midFrame, background='white', foreground='black', width=60, height=13, wrap='none') # setup the scrollbar - scroll = Tkinter.Scrollbar(midFrame, orient='horizontal',command=self.results.xview) + scroll = tkinter.Scrollbar(midFrame, orient='horizontal',command=self.results.xview) scroll.pack(side='bottom', fill='x') self.results.configure(xscrollcommand = scroll.set) @@ -151,13 +151,13 @@ self.alternation.pack(side='top') """ - Tkinter.Button(ctlbuttons, text='Quit', + tkinter.Button(ctlbuttons, text='Quit', background='#a0c0c0', foreground='black', command=self.destroy).pack(side='left') - self.loadMenuButton = Tkinter.Menubutton(ctlbuttons, text='Load', background='#a0c0c0', foreground='black', relief='raised') + self.loadMenuButton = tkinter.Menubutton(ctlbuttons, text='Load', background='#a0c0c0', foreground='black', relief='raised') self.loadMenuButton.pack(side='left') - self.loadMenu=Tkinter.Menu(self.loadMenuButton,tearoff=0) + 
self.loadMenu=tkinter.Menu(self.loadMenuButton,tearoff=0) self.loadMenu.add_command(label='Load Lexicon', underline=0,command = lambda filetype='.lex', targetWindow = self.lexicon, tf = 'l' : self.loadTypetoTarget(filetype, targetWindow, tf)) self.loadMenu.add_command(label='Load Rules', underline=0,command = lambda filetype='.rul', targetWindow = self.rules, tf = 'r' : self.loadTypetoTarget(filetype, targetWindow, tf)) @@ -166,9 +166,9 @@ # - self.saveMenuButton = Tkinter.Menubutton(ctlbuttons, text='Save',background='#a0c0c0', foreground='black', relief='raised') + self.saveMenuButton = tkinter.Menubutton(ctlbuttons, text='Save',background='#a0c0c0', foreground='black', relief='raised') self.saveMenuButton.pack(side='left') - self.saveMenu=Tkinter.Menu(self.saveMenuButton,tearoff=0) + self.saveMenu=tkinter.Menu(self.saveMenuButton,tearoff=0) self.saveMenu.add_command(label='Save Lexicon', underline=0,command = lambda filename=self.lexfilename, sourceWindow = self.lexicon : self.writeToFilefromWindow(filename, sourceWindow,'w',0,'l')) self.saveMenu.add_command(label='Save Rules', underline=0,command = lambda filename=self.rulfilename, sourceWindow = self.rules : self.writeToFilefromWindow(filename, sourceWindow,'w',0,'r')) self.saveMenu.add_command(label='Save Results', underline=0,command = lambda filename='.results', sourceWindow = self.results : self.writeToFilefromWindow(filename, sourceWindow,'w',0)) @@ -176,12 +176,12 @@ self.saveMenuButton["menu"]=self.saveMenu - Tkinter.Label(ctlbuttons, text=" Preset:").pack(side='left') - - self.configValue = Tkinter.StringVar() - self.configsMenuButton = Tkinter.Menubutton(ctlbuttons, text='Configs', background='#a0c0c0', foreground='black', relief='raised') + tkinter.Label(ctlbuttons, text=" Preset:").pack(side='left') + + self.configValue = tkinter.StringVar() + self.configsMenuButton = tkinter.Menubutton(ctlbuttons, text='Configs', background='#a0c0c0', foreground='black', relief='raised') self.configsMenuButton.pack(side='left') - self.configsMenu=Tkinter.Menu(self.configsMenuButton,tearoff=0) + self.configsMenu=tkinter.Menu(self.configsMenuButton,tearoff=0) # read the directory for cfgs, add them to the menu # add path expander, to expand ~ & given home dirs. 
@@ -210,21 +210,21 @@ # background='#b0f0d0', foreground='#008b45', # command=self.generate).pack(side='right') - self.tracingbtn = Tkinter.Button(ctlbuttons, text='Tracing', + self.tracingbtn = tkinter.Button(ctlbuttons, text='Tracing', background='#fff0f0', foreground='black', command=lambda : self.create_destroyDebugTracing()).pack(side='right') - self.graphMenuButton = Tkinter.Menubutton(ctlbuttons, text='Graph', background='#d0d0e8', foreground='black', relief='raised') + self.graphMenuButton = tkinter.Menubutton(ctlbuttons, text='Graph', background='#d0d0e8', foreground='black', relief='raised') self.graphMenuButton.pack(side='right') - self.graphMenu=Tkinter.Menu(self.graphMenuButton,tearoff=0) + self.graphMenu=tkinter.Menu(self.graphMenuButton,tearoff=0) self.graphMenu.add_command(label='Graph Lexicon', underline=0,command = lambda which = 'l' : self.graph(which)) self.graphMenu.add_command(label='Graph FSA Rules', underline=0,command = lambda which = 'r' : self.graph(which)) # self.loadMenu.add_command(label='Load Lexicon', underline=0,command = lambda filetype='.lex', targetWindow = self.lexicon : loadTypetoTarget(self, filetype, targetWindow)) self.graphMenuButton["menu"]=self.graphMenu - self.helpbtn = Tkinter.Button(ctlbuttons, text='Help', + self.helpbtn = tkinter.Button(ctlbuttons, text='Help', background='#f0fff0', foreground='black', command=self.kimmoHelp).pack(side='right') @@ -233,10 +233,10 @@ midFrame.pack(side='left') # alternationFrame.pack(side='left') - Tkinter.Button(level2, text='Generate', + tkinter.Button(level2, text='Generate', background='#a0c0c0', foreground='black', command=self.generate).pack(side='left') - Tkinter.Button(level2, text='Recognize', + tkinter.Button(level2, text='Recognize', background='#a0c0c0', foreground='black', command=self.recognize).pack(side='left') @@ -267,16 +267,16 @@ # Enter mainloop. 
- Tkinter.mainloop() + tkinter.mainloop() except: - print 'Error creating Tree View' + print('Error creating Tree View') self.destroy() raise def init_menubar(self): - menubar = Tkinter.Menu(self._root) - - filemenu = Tkinter.Menu(menubar, tearoff=0) + menubar = tkinter.Menu(self._root) + + filemenu = tkinter.Menu(menubar, tearoff=0) filemenu.add_command(label='Save Rules', underline=0, command=self.save, accelerator='Ctrl-s') self._root.bind('', self.save) @@ -308,26 +308,26 @@ else: try: # have in its own special di decial class - self.dbgTracing = Tkinter.Toplevel() + self.dbgTracing = tkinter.Toplevel() self.dbgTracing.title("Tracing/Debug") - dbgTraceFrame2 = Tkinter.Frame(self.dbgTracing) + dbgTraceFrame2 = tkinter.Frame(self.dbgTracing) dbgTraceFrame2.pack(side='top', fill='x') - dbgTraceFrame = Tkinter.Frame(self.dbgTracing) + dbgTraceFrame = tkinter.Frame(self.dbgTracing) dbgTraceFrame.pack(side='top', fill='x',expand='yes') self.traceWindow = ScrolledText(dbgTraceFrame, background='#f4f4f4', foreground='#aa0000', width=45, height=24, wrap='none') - Tkinter.Button(dbgTraceFrame2, text='clear', + tkinter.Button(dbgTraceFrame2, text='clear', background='#a0c0c0', foreground='black', - command= lambda start=1.0, end=Tkinter.END : self.traceWindow.delete(start,end) + command= lambda start=1.0, end=tkinter.END : self.traceWindow.delete(start,end) ).pack(side='right') - Tkinter.Button(dbgTraceFrame2, text='Save', + tkinter.Button(dbgTraceFrame2, text='Save', background='#a0c0c0', foreground='black', command= lambda file=self.kimmoResultFile,windowName=self.traceWindow,mode='w',auto=0 : self.writeToFilefromWindow(file,windowName,mode,auto) ).pack(side='left') - scroll = Tkinter.Scrollbar(dbgTraceFrame, orient='horizontal',command=self.traceWindow.xview) + scroll = tkinter.Scrollbar(dbgTraceFrame, orient='horizontal',command=self.traceWindow.xview) scroll.pack(side='bottom', fill='x') self.traceWindow.configure(xscrollcommand = scroll.set) @@ -340,7 +340,7 @@ self.dbgTracing.protocol("WM_DELETE_WINDOW", self.create_destroyDebugTracing) except: - print 'Error creating Tree View' + print('Error creating Tree View') self.dbgTracing.destroy() self.dbgTracing = None self.debug = False @@ -355,7 +355,7 @@ if not (auto and windowName and filename): - from tkFileDialog import asksaveasfilename + from tkinter.filedialog import asksaveasfilename ftypes = [('Text file', '.txt'),('Rule file', '.rul'),('Lexicon file', '.lex'),('Alternations file', '.alt'), ('All files', '*')] filename = asksaveasfilename(filetypes=ftypes, @@ -365,7 +365,7 @@ self.guiError('Need File Name') return f = open(filename, 'w') - f.write(windowName.get(1.0,Tkinter.END)) + f.write(windowName.get(1.0,tkinter.END)) f.close() if filename: @@ -401,7 +401,7 @@ """ def configLoader(self,*args): - print args[0] + print(args[0]) filename = args[0] # if arg is a valid file, load by line. @@ -471,7 +471,7 @@ text.append(line) # empty the window now that the file was valid - windowField.delete(1.0, Tkinter.END) + windowField.delete(1.0, tkinter.END) windowField.insert(1.0, '\n'.join(text)) @@ -483,7 +483,7 @@ if not (fileType and targetWindow): return - from tkFileDialog import askopenfilename + from tkinter.filedialog import askopenfilename ftypes = [(fileType, fileType)] filename = askopenfilename(filetypes=ftypes, defaultextension=fileType) @@ -502,7 +502,7 @@ # graphical interface to file loading. 
"Load rule/lexicon set from a text file" - from tkFileDialog import askopenfilename + from tkinter.filedialog import askopenfilename ftypes = [('Text file', '.txt'), ('All files', '*')] # filename = askopenfilename(filetypes=ftypes, defaultextension='.txt') @@ -556,10 +556,10 @@ def clear(self, *args): "Clears the grammar and lexical and sentence inputs" - self.lexicon.delete(1.0, Tkinter.END) - self.rules.delete(1.0, Tkinter.END) + self.lexicon.delete(1.0, tkinter.END) + self.rules.delete(1.0, tkinter.END) # self.alternation.delete(1.0, Tkinter.END) - self.results.delete(1.0, Tkinter.END) + self.results.delete(1.0, tkinter.END) def destroy(self, *args): if self._root is None: return @@ -570,10 +570,10 @@ # for single stepping through a trace. # need to make the kimmo class capable of being interrupted & resumed. def step(self, *args): - print 'a' + print('a') def singlestep(self, *args): - print 'a' + print('a') def batch(self, *args): filename = self.batchpath.get() @@ -704,10 +704,10 @@ # check & set path, if necessary, need read and write access to path path = '' pathstatus = os.stat('./') # 0600 is r/w, binary evaluation - if not ((pathstatus[0] & 0600) == 0600): + if not ((pathstatus[0] & 0o600) == 0o600): path = '/tmp/' + str(os.environ.get("USER")) + '/' # need terminating / if not os.path.exists(path): - os.mkdir(path,0777) + os.mkdir(path,0o777) pathre = re.compile(r"^.*\/") @@ -779,7 +779,7 @@ matchIdx = '1.0' matchRight = '1.0' while matchIdx != '': - matchIdx = window.search(word,matchRight,count=1,stopindex=Tkinter.END) + matchIdx = window.search(word,matchRight,count=1,stopindex=tkinter.END) if matchIdx == '': break strptr = matchIdx.split(".") @@ -799,11 +799,11 @@ or recognize. (i.e. loading all rules, lexicon, and alternations """ # only initialize Kimmo if the contents of the *rules* have changed - tmprmd5 = md5.new(self.rules.get(1.0, Tkinter.END)) - tmplmd5 = md5.new(self.lexicon.get(1.0, Tkinter.END)) + tmprmd5 = md5.new(self.rules.get(1.0, tkinter.END)) + tmplmd5 = md5.new(self.lexicon.get(1.0, tkinter.END)) if (not self.kimmoinstance) or (self.rulemd5 != tmprmd5) or (self.lexmd5 != tmplmd5): self.guiError("Creating new Kimmo instance") - self.kimmoinstance = KimmoControl(self.lexicon.get(1.0, Tkinter.END),self.rules.get(1.0, Tkinter.END),'','',self.debug) + self.kimmoinstance = KimmoControl(self.lexicon.get(1.0, tkinter.END),self.rules.get(1.0, tkinter.END),'','',self.debug) self.guiError("") self.rulemd5 = tmprmd5 self.lexmd5 = tmplmd5 @@ -820,7 +820,7 @@ def refresh(self, *args): if self._root is None: return - print self.wordIn.get() + print(self.wordIn.get()) # CAPTURE PYTHON-KIMMO OUTPUT @@ -830,8 +830,8 @@ # if there is a trace/debug window if self.dbgTracing: - self.traceWindow.insert(Tkinter.END, string.join(args," ")) - self.traceWindow.see(Tkinter.END) + self.traceWindow.insert(tkinter.END, string.join(args," ")) + self.traceWindow.see(tkinter.END) # otherwise, just drop the output. 
@@ -858,7 +858,7 @@ # helpText = Tkinter.StringVar() helpText = '' try: f = open(self.helpFilename, 'r') - except IOError, e: + except IOError as e: self.guiError("HelpFile not loaded") return @@ -873,7 +873,7 @@ helpText = re.sub("\r","",helpText) - helpWindow = Tkinter.Toplevel() + helpWindow = tkinter.Toplevel() helpWindow.title("PyKimmo Documentation & Help") # help = Tkinter.Label(helpWindow,textvariable=helpText, justify='left' ) # @@ -884,14 +884,14 @@ help.pack(side='top') help.insert(1.0, helpText) # setup the scrollbar - scroll = Tkinter.Scrollbar(helpWindow, orient='horizontal',command=help.xview) + scroll = tkinter.Scrollbar(helpWindow, orient='horizontal',command=help.xview) scroll.pack(side='bottom', fill='x') help.configure(xscrollcommand = scroll.set) # now highlight up the file - matchIdx = Tkinter.END - matchRight = Tkinter.END - matchLen = Tkinter.IntVar() + matchIdx = tkinter.END + matchRight = tkinter.END + matchLen = tkinter.IntVar() tagId = 1 while 1: matchIdx = help.search(r"::[^\n]*::",matchIdx, stopindex=1.0, backwards=True, regexp=True, count=matchLen ) @@ -900,7 +900,7 @@ matchIdxFields = matchIdx.split(".") matchLenStr = matchIdxFields[0] + "." + str(string.atoi(matchIdxFields[1],10) + matchLen.get()) - print (matchIdx, matchLenStr) + print((matchIdx, matchLenStr)) help.tag_add(tagId, matchIdx, matchLenStr ) help.tag_configure(tagId, background='aquamarine', foreground='blue', underline=True) tagId += 1 @@ -974,11 +974,11 @@ class tkImageView: def __init__(self, imagefileName, title): - self._root = Tkinter.Toplevel() + self._root = tkinter.Toplevel() self._root.title(title + ' (' + imagefileName + ')') - self.image = Tkinter.PhotoImage("LGraph",file=imagefileName) - - Tkinter.Label(self._root, image=self.image).pack(side='top',fill='x') + self.image = tkinter.PhotoImage("LGraph",file=imagefileName) + + tkinter.Label(self._root, image=self.image).pack(side='top',fill='x') # self._root.mainloop() def destroy(self, *args): @@ -989,11 +989,11 @@ ######################### Dialog Boxes ############################## -class ListDialog(Tkinter.Toplevel): +class ListDialog(tkinter.Toplevel): def __init__(self, parent, listOptions, title = None): - Tkinter.Toplevel.__init__(self, parent) + tkinter.Toplevel.__init__(self, parent) self.transient(parent) if title: @@ -1003,13 +1003,13 @@ self.result = None - body = Tkinter.Frame(self) + body = tkinter.Frame(self) self.initial_focus = self.body(body) body.pack(padx=5, pady=5) - box = Tkinter.Frame(self) - Tkinter.Label(box,text="Select an FSA to graph").pack(side='top',fill='x') + box = tkinter.Frame(self) + tkinter.Label(box,text="Select an FSA to graph").pack(side='top',fill='x') box.pack() @@ -1043,13 +1043,13 @@ def listbox(self, listOptions): - box = Tkinter.Frame(self) - self.lb = Tkinter.Listbox(box,height=len(listOptions),width=30,background='#f0f0ff', selectbackground='#c0e0ff' + box = tkinter.Frame(self) + self.lb = tkinter.Listbox(box,height=len(listOptions),width=30,background='#f0f0ff', selectbackground='#c0e0ff' ,selectmode='single') self.lb.pack() for x in listOptions: - self.lb.insert(Tkinter.END,x) + self.lb.insert(tkinter.END,x) box.pack() @@ -1057,11 +1057,11 @@ # add standard button box. 
override if you don't want the # standard buttons - box = Tkinter.Frame(self) - - w = Tkinter.Button(box, text="OK", width=10, command=self.ok, default="active") + box = tkinter.Frame(self) + + w = tkinter.Button(box, text="OK", width=10, command=self.ok, default="active") w.pack(side="left", padx=5, pady=5) - w = Tkinter.Button(box, text="Cancel", width=10, command=self.cancel) + w = tkinter.Button(box, text="Cancel", width=10, command=self.cancel) w.pack(side="left", padx=5, pady=5) self.bind("<Return>", self.ok) @@ -1245,15 +1245,15 @@ self.s = KimmoRuleSet(self.ksubsets, self.kdefaults, self.krules) self.s.debug = debug self.ok = 1 - except RuntimeError, e: + except RuntimeError as e: self.errors = ('Caught:' + str(e) + ' ' + self.errors) - print 'Caught:', e - print "Setup of the kimmoinstance failed. Most likely cause" - print "is infinite recursion due to self-referential lexicon" - print "For instance:" - print "Begin: Begin Noun End" - print "Begin is pointing to itself. Simple example, but check" - print "to insure no directed loops" + print('Caught:', e) + print("Setup of the kimmoinstance failed. Most likely cause") + print("is infinite recursion due to self-referential lexicon") + print("For instance:") + print("Begin: Begin Noun End") + print("Begin is pointing to itself. Simple example, but check") + print("to insure no directed loops") self.ok = 0 @@ -1313,8 +1313,8 @@ results_string += (batch_result_str) # place a separator between results - print '----- '+ time.strftime("%a, %d %b %Y %I:%M %p", time.gmtime()) +' -----\n' - print results_string + print('----- '+ time.strftime("%a, %d %b %Y %I:%M %p", time.gmtime()) +' -----\n') + print(results_string) @@ -2213,7 +2213,7 @@ def name(self): return self._name def pairs(self): return self._pairs def start(self): return self._state_descriptions[0][0] - def is_state(self, index): return self.transitions.has_key(index) + def is_state(self, index): return index in self.transitions def contains_final(self, indices): @@ -2283,7 +2283,7 @@ # print 'any state match' # {col num, next state num (0 if fail), is final state} # if transition row is valid - if self.transitions.has_key(self.transitions[index][i]): ft = self.is_final[self.transitions[index][i]] + if self.transitions[index][i] in self.transitions: ft = self.is_final[self.transitions[index][i]] else : ft = '' any_next_states_ary.append([ i, self.transitions[index][i], ft, pair.__repr__() ] ) if not any_next_state: @@ -2297,7 +2297,7 @@ # times? (i.e. 
our state is already in next_state next_state_isset = 1 next_state = self.transitions[index][i] - if self.transitions.has_key(next_state): + if next_state in self.transitions: if not(next_state in next_states): next_states.append(next_state) @@ -2349,12 +2349,12 @@ for w in words: if len(w.letters()) <= word_position: continue fc = w.letters()[word_position] - if first_chars.has_key(fc): + if fc in first_chars: first_chars[fc].append(w) else: first_chars[fc] = [ w ] sub_tries = [] - for c, sub_words in first_chars.items(): + for c, sub_words in list(first_chars.items()): sub_tries.append( (c, self.build_trie(sub_words, word_position+1)) ) return ( [w for w in words if len(w.letters()) == word_position], sub_tries ) @@ -2410,12 +2410,12 @@ # print 'current alternation: ' + name if name == None: return [] - elif self.alternations.has_key(name): + elif name in self.alternations: result = [] for ln in self.alternations[name].lexicon_names(): result.extend(self._collect(ln)) return result - elif self.lexicons.has_key(name): + elif name in self.lexicons: return [ self.lexicons[name] ] else: # raise ValueError('no lexicon or alternation named ' + name) @@ -2502,21 +2502,21 @@ padstring = '' for x in range(position): padstring = padstring + ' ' - print '%s%d %s:%s \n' % (padstring, position, this_input, this_output), - print '%s%d: Input: ' % (padstring, position,), + print('%s%d %s:%s \n' % (padstring, position, this_input, this_output), end=' ') + print('%s%d: Input: ' % (padstring, position,), end=' ') for i in input: - print ' ' + i + ' ', + print(' ' + i + ' ', end=' ') if this_input: - print '[' + this_input + ']...', - print - - - print '%s%d> Output: ' % (padstring, position,), + print('[' + this_input + ']...', end=' ') + print() + + + print('%s%d> Output: ' % (padstring, position,), end=' ') for o in output: - print ' ' + o + ' ', + print(' ' + o + ' ', end=' ') if this_output: - print '<' + this_output + '>...', - print + print('<' + this_output + '>...', end=' ') + print() # for (start, rule, fsa_states, required_truth_value) in rule_states: @@ -2524,7 +2524,7 @@ if False: # morphological_state: - print ' possible input chars = %s' % invert.possible_next_characters(morphological_state) + print(' possible input chars = %s' % invert.possible_next_characters(morphological_state)) # print morphological_state @@ -2548,7 +2548,7 @@ if ((position >= len(input_tokens)) ): # and (not morphological_state) - if (self.debug) : print ' AT END OF WORD' + if (self.debug) : print(' AT END OF WORD') # FOR RECOGNIZER # this will yield some words twice, not all # also, recognizer is failing to put on the added information like "+genetive" @@ -2596,16 +2596,16 @@ if (required_truth_value != truth_value): if (self.debug): - print ' BLOCKED by rule {%d %s %s}' % (start, rule, required_truth_value) - print fsa_states + print(' BLOCKED by rule {%d %s %s}' % (start, rule, required_truth_value)) + print(fsa_states) break else: if 0: # (self.debug): - print ' passed rule {%d %s %s}' % (start, rule, required_truth_value) + print(' passed rule {%d %s %s}' % (start, rule, required_truth_value)) else: if (self.debug): - print ' SUCCESS!' 
+ print(' SUCCESS!') yield result_str, result_words else: if morphological_state: # recognizer; get the next possible surface chars that can result in @@ -2666,7 +2666,7 @@ break else: if (0): # (self.debug): - print ' passed rule {%d %s %s}' % (start, rule, required_truth_value) + print(' passed rule {%d %s %s}' % (start, rule, required_truth_value)) elif (len(next_fsa_state_set) == 0): # if it isn't true, then it will have to fail, bcs we are at # the end of the state set. @@ -2676,15 +2676,15 @@ break else: if (0): # (self.debug): - print ' passed rule {%d %s %s}' % (start, rule, required_truth_value) + print(' passed rule {%d %s %s}' % (start, rule, required_truth_value)) else: next_rule_states.append( (start, rule, next_fsa_state_set, required_truth_value) ) - if (self.debug) : print rule_state_debug + if (self.debug) : print(rule_state_debug) if (fail): if (self.debug): - print ' BLOCKED by rule %s' % (fail,) + print(' BLOCKED by rule %s' % (fail,)) continue @@ -2703,7 +2703,7 @@ if (rule.rightFSA()): if (self.debug): - print ' adding rule {%d %s %s}' % (position, rule, required_truth_value) + print(' adding rule {%d %s %s}' % (position, rule, required_truth_value)) next_rule_states.append( (position, rule, [ rule.rightFSA().start() ], required_truth_value) ) else: if (required_truth_value == False): @@ -2711,7 +2711,7 @@ continue else: if (0): # (self.debug): - print ' passed rule ' + str(rule) + print(' passed rule ' + str(rule)) # if did not fail, call recursively on next chars if (fail == None): @@ -2748,7 +2748,7 @@ yield o else: if (self.debug): - print ' BLOCKED by rule ' + str(fail) + print(' BLOCKED by rule ' + str(fail)) def _initial_rule_states(self): return [ (0, rule, [ rule.start() ], True) for rule in self.rules() if isinstance(rule, KimmoFSARule)] @@ -2771,9 +2771,9 @@ if not morphology_state: - print "Bad Morphological State, failing recognition" + print("Bad Morphological State, failing recognition") return - if (self.debug) : print 'recognize: ' + input_tokens + if (self.debug) : print('recognize: ' + input_tokens) # print output_words for o in self._generate(input_tokens, 0, self._initial_rule_states(), morphology_state, [], [], '', output_words, invert): @@ -2828,18 +2828,18 @@ path = os.path.expanduser(filename) try: f = open(path, 'r') - except IOError, e: path = find_corpus_file("kimmo", filename) RefactoringTool: Files that were modified: RefactoringTool: ../python3/nltk_contrib/nltk_contrib/misc/kimmo.py RefactoringTool: Warnings/messages while refactoring: RefactoringTool: ### In file ../python3/nltk_contrib/nltk_contrib/misc/kimmo.py ### RefactoringTool: Line 963: could not convert: raise "Dummy" RefactoringTool: Python 3 does not support string exceptions try: f = open(path, 'r') - except IOError, e: + except IOError as e: if gui: gui.guiError(str(e)) else: - print str(e) - print "FAILURE" + print(str(e)) + print("FAILURE") return "" - print "Loaded:", path + print("Loaded:", path) return f # MAIN @@ -2866,20 +2866,20 @@ elif x == "debug": console_debug = 1 - print 'Tips:' - print 'kimmo.cfg is loaded by default, so if you name your project that, ' - print "it will be loaded at startup\n" - - print 'For commandline operation:' - print ' (for instance if you want to use a different editor)' - print "To Recognize:" - print " % python kimmo.py english.lex english.rul -r:cats" - print "To Generate:" - print " % python kimmo.py english.lex english.rul -g:cat+s" - print "To Batch Test:" - print " % python kimmo.py english.lex english.rul
english.batch_test" - print "With Debug and Tracing:" - print " % python kimmo.py english.lex english.rul -r:cats debug\n" + print('Tips:') + print('kimmo.cfg is loaded by default, so if you name your project that, ') + print("it will be loaded at startup\n") + + print('For commandline operation:') + print(' (for instance if you want to use a different editor)') + print("To Recognize:") + print(" % python kimmo.py english.lex english.rul -r:cats") + print("To Generate:") + print(" % python kimmo.py english.lex english.rul -g:cat+s") + print("To Batch Test:") + print(" % python kimmo.py english.lex english.rul english.batch_test") + print("With Debug and Tracing:") + print(" % python kimmo.py english.lex english.rul -r:cats debug\n") # print filename_lex @@ -2894,17 +2894,17 @@ # creation failed, stop if not kimmoinstance.ok : - print kimmoinstance.errors + print(kimmoinstance.errors) sys.exit() if recognize_string: recognize_results = kimmoinstance.recognize(recognize_string) - print recognize_results + print(recognize_results) if generate_string: generate_results = kimmoinstance.generate(generate_string) - print generate_results # remember to format + print(generate_results) # remember to format if filename_batch_test: # run a batch kimmoinstance.batch(filename_batch_test) + for i in '$(find ../python3 -type f -name '\''*.py'\'' |grep -v '\''Tables\.py'\'')' + 2to3 -w -n ../python3/nltk_contrib/nltk_contrib/misc/huffman.py RefactoringTool: Skipping optional fixer: buffer RefactoringTool: Skipping optional fixer: idioms RefactoringTool: Skipping optional fixer: set_literal RefactoringTool: Skipping optional fixer: ws_comma RefactoringTool: Refactored ../python3/nltk_contrib/nltk_contrib/misc/huffman.py RefactoringTool: Files that were modified: RefactoringTool: ../python3/nltk_contrib/nltk_contrib/misc/huffman.py --- ../python3/nltk_contrib/nltk_contrib/misc/huffman.py (original) +++ ../python3/nltk_contrib/nltk_contrib/misc/huffman.py (refactored) @@ -5,7 +5,7 @@ from operator import itemgetter def huffman_tree(text): - coding = nltk.FreqDist(text).items() + coding = list(nltk.FreqDist(text).items()) coding.sort(key=itemgetter(1)) while len(coding) > 1: a, b = coding[:2] @@ -67,8 +67,8 @@ text_len = len(text) comp_len = len(encode(code_tree, text)) / 8.0 compression = (text_len - comp_len) / text_len - print compression, - print + print(compression, end=' ') + print() trial(train1, [test1, test2, test3]) trial(train2, [test1, test2, test3]) + for i in '$(find ../python3 -type f -name '\''*.py'\'' |grep -v '\''Tables\.py'\'')' + 2to3 -w -n ../python3/nltk_contrib/nltk_contrib/misc/fsa.py RefactoringTool: Skipping optional fixer: buffer RefactoringTool: Skipping optional fixer: idioms RefactoringTool: Skipping optional fixer: set_literal RefactoringTool: Skipping optional fixer: ws_comma RefactoringTool: Refactored ../python3/nltk_contrib/nltk_contrib/misc/fsa.py --- ../python3/nltk_contrib/nltk_contrib/misc/fsa.py (original) +++ ../python3/nltk_contrib/nltk_contrib/misc/fsa.py (refactored) @@ -64,8 +64,8 @@ A generator that yields each transition arrow in the FSA in the form (source, label, target). """ - for (state, map) in self._transitions.items(): - for (symbol, targets) in map.items(): + for (state, map) in list(self._transitions.items()): + for (symbol, targets) in list(map.items()): for target in targets: yield (state, symbol, target) @@ -74,7 +74,7 @@ A generator for all possible labels taking state s1 to state s2. 
""" map = self._transitions.get(s1, {}) - for (symbol, targets) in map.items(): + for (symbol, targets) in list(map.items()): if s2 in targets: yield symbol def sigma(self): @@ -127,7 +127,7 @@ @returns: a list of all states in the FSA. @rtype: list """ - return self._transitions.keys() + return list(self._transitions.keys()) def add_final(self, state): """ @@ -177,9 +177,9 @@ @param s2: the destination of the transition """ if s1 not in self.states(): - raise ValueError, "State %s does not exist" % s1 + raise ValueError("State %s does not exist" % s1) if s2 not in self.states(): - raise ValueError, "State %s does not exist" % s1 + raise ValueError("State %s does not exist" % s1) self._add_transition(self._transitions, s1, label, s2) self._add_transition(self._reverse, s2, label, s1) @@ -203,16 +203,16 @@ @param s2: the destination of the transition """ if s1 not in self.states(): - raise ValueError, "State %s does not exist" % s1 + raise ValueError("State %s does not exist" % s1) if s2 not in self.states(): - raise ValueError, "State %s does not exist" % s1 + raise ValueError("State %s does not exist" % s1) self._del_transition(self._transitions, s1, label, s2) self._del_transition(self._reverse, s2, label, s1) def delete_state(self, state): "Removes a state and all its transitions from the FSA." if state not in self.states(): - raise ValueError, "State %s does not exist" % state + raise ValueError("State %s does not exist" % state) for (s1, label, s2) in self.incident_transitions(state): self.delete(s1, label, s2) del self._transitions[state] @@ -226,10 +226,10 @@ result = set() forward = self._transitions[state] backward = self._reverse[state] - for label, targets in forward.items(): + for label, targets in list(forward.items()): for target in targets: result.add((state, label, target)) - for label, targets in backward.items(): + for label, targets in list(backward.items()): for target in targets: result.add((target, label, state)) return result @@ -239,9 +239,9 @@ Assigns a state a new identifier. """ if old not in self.states(): - raise ValueError, "State %s does not exist" % old + raise ValueError("State %s does not exist" % old) if new in self.states(): - raise ValueError, "State %s already exists" % new + raise ValueError("State %s already exists" % new) changes = [] for (s1, symbol, s2) in self.generate_transitions(): if s1 == old and s2 == old: @@ -274,8 +274,8 @@ Return whether this is a DFA (every symbol leads from a state to at most one target state). """ - for map in self._transitions.values(): - for targets in map.values(): + for map in list(self._transitions.values()): + for targets in list(map.values()): if len(targets) > 1: return False RefactoringTool: Files that were modified: RefactoringTool: ../python3/nltk_contrib/nltk_contrib/misc/fsa.py return True @@ -287,14 +287,14 @@ """ next = self.next(state, symbol) if len(next) > 1: - raise ValueError, "This FSA is nondeterministic -- use nextStates instead." + raise ValueError("This FSA is nondeterministic -- use nextStates instead.") elif len(next) == 1: return list(next)[0] else: return None def forward_traverse(self, state): "All states reachable by following transitions from a given state." 
result = set() - for (symbol, targets) in self._transitions[state].items(): + for (symbol, targets) in list(self._transitions[state].items()): result = result.union(targets) return result @@ -302,7 +302,7 @@ """All states from which a given state is reachable by following transitions.""" result = set() - for (symbol, targets) in self._reverse[state].items(): + for (symbol, targets) in list(self._reverse[state].items()): result = result.union(targets) return result @@ -334,7 +334,7 @@ self._clean_map(self._reverse[state]) def _clean_map(self, map): - for (key, value) in map.items(): + for (key, value) in list(map.items()): if len(value) == 0: del map[key] @@ -396,7 +396,7 @@ for label in self.sigma(): nfa_next = tuple(self.e_closure(self.move(map[dfa_state], label))) - if map.has_key(nfa_next): + if nfa_next in map: dfa_next = map[nfa_next] else: dfa_next = dfa.new_state() @@ -412,7 +412,7 @@ "Generate all accepting sequences of length at most maxlen." if maxlen > 0: if state in self._finals: - print prefix + print(prefix) for (s1, labels, s2) in self.outgoing_transitions(state): for label in labels(): self.generate(maxlen-1, s2, prefix+label) @@ -421,14 +421,14 @@ """ Print a representation of this FSA (in human-readable YAML format). """ - print yaml.dump(self) + print(yaml.dump(self)) @classmethod def from_yaml(cls, loader, node): map = loader.construct_mapping(node) result = cls(map.get('sigma', []), {}, map.get('finals', [])) - for (s1, map1) in map['transitions'].items(): - for (symbol, targets) in map1.items(): + for (s1, map1) in list(map['transitions'].items()): + for (symbol, targets) in list(map1.items()): for s2 in targets: result.insert(s1, symbol, s2) return result @@ -551,19 +551,19 @@ # Use a regular expression to initialize the FSA. re = 'abcd' - print 'Regular Expression:', re + print('Regular Expression:', re) re2nfa(fsa, re) - print "NFA:" + print("NFA:") fsa.pp() # Convert the (nondeterministic) FSA to a deterministic FSA. 
dfa = fsa.dfa() - print "DFA:" + print("DFA:") dfa.pp() # Prune the DFA dfa.prune() - print "PRUNED DFA:" + print("PRUNED DFA:") dfa.pp() # Use the FSA to generate all strings of length less than 3 + for i in '$(find ../python3 -type f -name '\''*.py'\'' |grep -v '\''Tables\.py'\'')' + 2to3 -w -n ../python3/nltk_contrib/nltk_contrib/misc/didyoumean.py RefactoringTool: Skipping optional fixer: buffer RefactoringTool: Skipping optional fixer: idioms RefactoringTool: Skipping optional fixer: set_literal RefactoringTool: Skipping optional fixer: ws_comma RefactoringTool: Refactored ../python3/nltk_contrib/nltk_contrib/misc/didyoumean.py RefactoringTool: Files that were modified: RefactoringTool: ../python3/nltk_contrib/nltk_contrib/misc/didyoumean.py --- ../python3/nltk_contrib/nltk_contrib/misc/didyoumean.py (original) +++ ../python3/nltk_contrib/nltk_contrib/misc/didyoumean.py (refactored) @@ -31,7 +31,7 @@ def test(self, token): hashed = self.specialhash(token) if hashed in self.learned: - words = self.learned[hashed].items() + words = list(self.learned[hashed].items()) sortby(words, 1, reverse=1) if token in [i[0] for i in words]: return 'This word seems OK' @@ -59,7 +59,7 @@ d.learn() # choice of words to be relevant related to the brown corpus for i in "birdd, oklaoma, emphasise, bird, carot".split(", "): - print i, "-", d.test(i) + print(i, "-", d.test(i)) if __name__ == "__main__": demo() + for i in '$(find ../python3 -type f -name '\''*.py'\'' |grep -v '\''Tables\.py'\'')' + 2to3 -w -n ../python3/nltk_contrib/nltk_contrib/misc/annotationgraph.py RefactoringTool: Skipping optional fixer: buffer RefactoringTool: Skipping optional fixer: idioms RefactoringTool: Skipping optional fixer: set_literal RefactoringTool: Skipping optional fixer: ws_comma RefactoringTool: Refactored ../python3/nltk_contrib/nltk_contrib/misc/annotationgraph.py RefactoringTool: Files that were modified: RefactoringTool: ../python3/nltk_contrib/nltk_contrib/misc/annotationgraph.py --- ../python3/nltk_contrib/nltk_contrib/misc/annotationgraph.py (original) +++ ../python3/nltk_contrib/nltk_contrib/misc/annotationgraph.py (refactored) @@ -12,7 +12,7 @@ def __init__(self, t): self._edges = [] self._len = len(t.leaves()) - self._nodes = range(self._len) + self._nodes = list(range(self._len)) self._convert(t, 0) self._index = Index((start, (end, label)) for (start, end, label) in self._edges) @@ -75,7 +75,7 @@ t = Tree(s) ag = AnnotationGraph(t) for p in ag.pas2([]): - print p + print(p) if __name__ == '__main__': demo() + for i in '$(find ../python3 -type f -name '\''*.py'\'' |grep -v '\''Tables\.py'\'')' + 2to3 -w -n ../python3/nltk_contrib/nltk_contrib/misc/__init__.py RefactoringTool: Skipping optional fixer: buffer RefactoringTool: Skipping optional fixer: idioms RefactoringTool: Skipping optional fixer: set_literal RefactoringTool: Skipping optional fixer: ws_comma RefactoringTool: No files need to be modified. 
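Every hunk in this log comes from a small set of mechanical 2to3 fixers: 'print' (statements become calls), 'has_key' (d.has_key(k) becomes k in d), 'dict' (view objects get wrapped in list() where a real list is consumed), 'except' and 'raise' (comma syntax becomes 'as' and a call), 'numliterals' (0600 becomes 0o600), and 'imports' (Tkinter and friends become tkinter.*), among others. The following is a minimal before-and-after sketch of those recurring rewrites, using illustrative names that do not come from the build itself (and assuming a Tk-enabled Python for the tkinter import):

    import tkinter                        # 'imports' fixer: Python 2 wrote 'import Tkinter'

    model = {"cat": "NN", "runs": "VBZ"}  # illustrative data only

    # 'print' fixer: statements become function calls
    print("training", end=' ')            # Python 2: print "training",
    print()                               # Python 2: print

    # 'has_key' fixer: membership tests use the 'in' operator
    if "cat" in model:                    # Python 2: model.has_key("cat")
        pass

    # 'dict' fixer: views are wrapped where a real list is required
    pairs = list(model.items())           # Python 2: model.items() or model.iteritems()

    # 'raise'/'except' fixers: comma syntax becomes a call and 'as'
    try:
        raise AssertionError("abstract interface")  # Python 2: raise AssertionError, "..."
    except AssertionError as err:                   # Python 2: except AssertionError, err:
        print(err)

    # 'numliterals' fixer: octal literals gain the 0o prefix
    mode = 0o600                          # Python 2: 0600

Note that 2to3 is purely syntactic: calls with no safe Python 3 equivalent, such as file(rule_output, 'w') in marshalbrill.py or string.join(args, " ") in kimmo.py, pass through unchanged, so those modules still need hand edits before they actually run under Python 3.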
+ for i in '$(find ../python3 -type f -name '\''*.py'\'' |grep -v '\''Tables\.py'\'')' + 2to3 -w -n ../python3/nltk_contrib/nltk_contrib/lpath/treecanvasview.py RefactoringTool: Skipping optional fixer: buffer RefactoringTool: Skipping optional fixer: idioms RefactoringTool: Skipping optional fixer: set_literal RefactoringTool: Skipping optional fixer: ws_comma RefactoringTool: Refactored ../python3/nltk_contrib/nltk_contrib/lpath/treecanvasview.py RefactoringTool: Files that were modified: RefactoringTool: ../python3/nltk_contrib/nltk_contrib/lpath/treecanvasview.py --- ../python3/nltk_contrib/nltk_contrib/lpath/treecanvasview.py (original) +++ ../python3/nltk_contrib/nltk_contrib/lpath/treecanvasview.py (refactored) @@ -1,10 +1,10 @@ from qt import * from qtcanvas import * -from treecanvasnode import * -from nodefeaturedialog import * -from translator import translate -from axis import * -import lpath +from .treecanvasnode import * +from .nodefeaturedialog import * +from .translator import translate +from .axis import * +from . import lpath import math class FilterExpressionPopup(QLabel): @@ -59,7 +59,7 @@ s,ans = QInputDialog.getText('Edit Filter Expression','Enter new filter expression', QLineEdit.Normal,self.text(),self) if ans: - s = unicode(s).strip() + s = str(s).strip() if s: self.node.filterExpression = s else: @@ -132,7 +132,7 @@ s,ans = QInputDialog.getText('New Filter Expression','Enter filter expression', QLineEdit.Normal,s,self) if ans: - s = unicode(s).strip() + s = str(s).strip() if s: if lpath.translate("//A[%s]"%s) is None: QMessageBox.critical(self,"Error","Invalid filter expression.") @@ -147,7 +147,7 @@ s,ans = QInputDialog.getText('Edit Label','Enter new label', QLineEdit.Normal,item.node.label,self) if ans: - s = unicode(s).strip() + s = str(s).strip() if s: if 'originalLabel' not in item.node.data: item.node.data['originalLabel'] = item.node.label + for i in '$(find ../python3 -type f -name '\''*.py'\'' |grep -v '\''Tables\.py'\'')' + 2to3 -w -n ../python3/nltk_contrib/nltk_contrib/lpath/treecanvasnode.py RefactoringTool: Skipping optional fixer: buffer RefactoringTool: Skipping optional fixer: idioms RefactoringTool: Skipping optional fixer: set_literal RefactoringTool: Skipping optional fixer: ws_comma RefactoringTool: Refactored ../python3/nltk_contrib/nltk_contrib/lpath/treecanvasnode.py RefactoringTool: Files that were modified: RefactoringTool: ../python3/nltk_contrib/nltk_contrib/lpath/treecanvasnode.py --- ../python3/nltk_contrib/nltk_contrib/lpath/treecanvasnode.py (original) +++ ../python3/nltk_contrib/nltk_contrib/lpath/treecanvasnode.py (refactored) @@ -1,6 +1,6 @@ from qt import * from qtcanvas import * -from lpathtree_qt import * +from .lpathtree_qt import * class Point: def __init__(self, *args): + for i in '$(find ../python3 -type f -name '\''*.py'\'' |grep -v '\''Tables\.py'\'')' + 2to3 -w -n ../python3/nltk_contrib/nltk_contrib/lpath/treecanvas.py RefactoringTool: Skipping optional fixer: buffer RefactoringTool: Skipping optional fixer: idioms RefactoringTool: Skipping optional fixer: set_literal RefactoringTool: Skipping optional fixer: ws_comma RefactoringTool: Refactored ../python3/nltk_contrib/nltk_contrib/lpath/treecanvas.py RefactoringTool: Files that were modified: RefactoringTool: ../python3/nltk_contrib/nltk_contrib/lpath/treecanvas.py --- ../python3/nltk_contrib/nltk_contrib/lpath/treecanvas.py (original) +++ ../python3/nltk_contrib/nltk_contrib/lpath/treecanvas.py (refactored) @@ -1,6 +1,6 @@ from qtcanvas import * from qt import * 
-from treecanvasnode import * +from .treecanvasnode import * __all__ = ["TreeCanvas"] @@ -105,7 +105,7 @@ item = node.gui item2 = node.parent.gui coords = item.connectingLine(item2) - apply(node.line.setPoints, coords) + node.line.setPoints(*coords) node.show() self.collapse(self._data) @@ -143,7 +143,7 @@ line = QCanvasLine(self) line.setPen(pen) node.line = line - apply(line.setPoints, coords) + line.setPoints(*coords) node.show() self._w = self._width[self._data] + for i in '$(find ../python3 -type f -name '\''*.py'\'' |grep -v '\''Tables\.py'\'')' + 2to3 -w -n ../python3/nltk_contrib/nltk_contrib/lpath/translator.py RefactoringTool: Skipping optional fixer: buffer RefactoringTool: Skipping optional fixer: idioms RefactoringTool: Skipping optional fixer: set_literal RefactoringTool: Skipping optional fixer: ws_comma RefactoringTool: Refactored ../python3/nltk_contrib/nltk_contrib/lpath/translator.py RefactoringTool: Files that were modified: RefactoringTool: ../python3/nltk_contrib/nltk_contrib/lpath/translator.py --- ../python3/nltk_contrib/nltk_contrib/lpath/translator.py (original) +++ ../python3/nltk_contrib/nltk_contrib/lpath/translator.py (refactored) @@ -1,5 +1,5 @@ -from StringIO import StringIO -import at_lite as at +from io import StringIO +from . import at_lite as at __all__ = ["translate", "translate_sub"] + for i in '$(find ../python3 -type f -name '\''*.py'\'' |grep -v '\''Tables\.py'\'')' + 2to3 -w -n ../python3/nltk_contrib/nltk_contrib/lpath/sqlviewdialog.py RefactoringTool: Skipping optional fixer: buffer RefactoringTool: Skipping optional fixer: idioms RefactoringTool: Skipping optional fixer: set_literal RefactoringTool: Skipping optional fixer: ws_comma RefactoringTool: Refactored ../python3/nltk_contrib/nltk_contrib/lpath/sqlviewdialog.py RefactoringTool: Files that were modified: RefactoringTool: ../python3/nltk_contrib/nltk_contrib/lpath/sqlviewdialog.py --- ../python3/nltk_contrib/nltk_contrib/lpath/sqlviewdialog.py (original) +++ ../python3/nltk_contrib/nltk_contrib/lpath/sqlviewdialog.py (refactored) @@ -1,5 +1,5 @@ from qt import * -import lpath +from . 
import lpath class SqlViewDialog(QDialog): def __init__(self, lpql=None, parent=None, name=None, + for i in '$(find ../python3 -type f -name '\''*.py'\'' |grep -v '\''Tables\.py'\'')' + 2to3 -w -n ../python3/nltk_contrib/nltk_contrib/lpath/qba.py RefactoringTool: Skipping optional fixer: buffer RefactoringTool: Skipping optional fixer: idioms RefactoringTool: Skipping optional fixer: set_literal RefactoringTool: Skipping optional fixer: ws_comma RefactoringTool: Refactored ../python3/nltk_contrib/nltk_contrib/lpath/qba.py RefactoringTool: Files that were modified: RefactoringTool: ../python3/nltk_contrib/nltk_contrib/lpath/qba.py --- ../python3/nltk_contrib/nltk_contrib/lpath/qba.py (original) +++ ../python3/nltk_contrib/nltk_contrib/lpath/qba.py (refactored) @@ -2,17 +2,17 @@ import os from qt import * from qtcanvas import * -from treecanvas import * -from treecanvasview import * -from lpathtree_qt import * -from axis import * -from db import * -from dbdialog import * -from sqlviewdialog import * -from overlay import * -from translator import translate -from parselpath import parse_lpath -from lpath import tokenize +from .treecanvas import * +from .treecanvasview import * +from .lpathtree_qt import * +from .axis import * +from .db import * +from .dbdialog import * +from .sqlviewdialog import * +from .overlay import * +from .translator import translate +from .parselpath import parse_lpath +from .lpath import tokenize class QBA(QMainWindow): def __init__(self, tree=None): @@ -171,7 +171,7 @@ "XPM (*.xpm)") if d.exec_loop() == QDialog.Rejected: return filenam = d.selectedFile() - filenam = unicode(filenam) + filenam = str(filenam) self._saveImageDir = os.path.dirname(filenam) if os.path.exists(filenam): res = QMessageBox.question( @@ -262,7 +262,7 @@ app.setMainWidget(w) if len(sys.argv) == 2: generator = LPathTreeModel.importTreebank(file(sys.argv[1])) - w.setTree(generator.next()) + w.setTree(next(generator)) w.show() w.setCaption('LPath QBA') # this is only necessary on windows app.exec_loop() + for i in '$(find ../python3 -type f -name '\''*.py'\'' |grep -v '\''Tables\.py'\'')' + 2to3 -w -n ../python3/nltk_contrib/nltk_contrib/lpath/parselpath.py RefactoringTool: Skipping optional fixer: buffer RefactoringTool: Skipping optional fixer: idioms RefactoringTool: Skipping optional fixer: set_literal RefactoringTool: Skipping optional fixer: ws_comma RefactoringTool: Refactored ../python3/nltk_contrib/nltk_contrib/lpath/parselpath.py RefactoringTool: Files that were modified: RefactoringTool: ../python3/nltk_contrib/nltk_contrib/lpath/parselpath.py --- ../python3/nltk_contrib/nltk_contrib/lpath/parselpath.py (original) +++ ../python3/nltk_contrib/nltk_contrib/lpath/parselpath.py (refactored) @@ -1,6 +1,6 @@ -from lpath import tokenize -from lpathtree import LPathTreeModel -from translator import translate +from .lpath import tokenize +from .lpathtree import LPathTreeModel +from .translator import translate SCOPE = ['{','}'] BRANCH = ['[',']'] @@ -129,17 +129,17 @@ def f(t, n): if t is not None: - print (" "*n) + t.data['label'] + print((" "*n) + t.data['label']) for c in t.children: f(c, n+4) def g(t, n): if t is not None: - print (" "*n) + t.data['label'] + print((" "*n) + t.data['label']) for c in t.lpChildren: g(c, n+4) else: - print " "*n + "None" + print(" "*n + "None") g(t,0) - print translate(t) + print(translate(t)) + for i in '$(find ../python3 -type f -name '\''*.py'\'' |grep -v '\''Tables\.py'\'')' + 2to3 -w -n ../python3/nltk_contrib/nltk_contrib/lpath/overlay.py RefactoringTool: 
Skipping optional fixer: buffer RefactoringTool: Skipping optional fixer: idioms RefactoringTool: Skipping optional fixer: set_literal RefactoringTool: Skipping optional fixer: ws_comma RefactoringTool: Refactored ../python3/nltk_contrib/nltk_contrib/lpath/overlay.py RefactoringTool: Files that were modified: RefactoringTool: ../python3/nltk_contrib/nltk_contrib/lpath/overlay.py --- ../python3/nltk_contrib/nltk_contrib/lpath/overlay.py (original) +++ ../python3/nltk_contrib/nltk_contrib/lpath/overlay.py (refactored) @@ -1,5 +1,5 @@ import re -from translator import translate_sub +from .translator import translate_sub __all__ = ["find_overlays", "Overlay"]; @@ -138,7 +138,7 @@ M = [] for match in TAB: - m = match.items() + m = list(match.items()) m.sort() L = [] for sym,tup in m: + for i in '$(find ../python3 -type f -name '\''*.py'\'' |grep -v '\''Tables\.py'\'')' + 2to3 -w -n ../python3/nltk_contrib/nltk_contrib/lpath/nodefeaturedialog.py RefactoringTool: Skipping optional fixer: buffer RefactoringTool: Skipping optional fixer: idioms RefactoringTool: Skipping optional fixer: set_literal RefactoringTool: Skipping optional fixer: ws_comma RefactoringTool: Refactored ../python3/nltk_contrib/nltk_contrib/lpath/nodefeaturedialog.py RefactoringTool: Files that were modified: RefactoringTool: ../python3/nltk_contrib/nltk_contrib/lpath/nodefeaturedialog.py --- ../python3/nltk_contrib/nltk_contrib/lpath/nodefeaturedialog.py (original) +++ ../python3/nltk_contrib/nltk_contrib/lpath/nodefeaturedialog.py (refactored) @@ -1,6 +1,6 @@ from qt import * -from at_lite import TableModel, TableEdit -import lpath +from .at_lite import TableModel, TableEdit +from . import lpath class NodeFeatureDialog(QDialog): def __init__(self, node, parent): @@ -8,7 +8,7 @@ self.setCaption('Node Attribute Dialog') self.resize(320,240) - tab = TableModel([("Name",unicode),("Value",unicode)]) + tab = TableModel([("Name",str),("Value",str)]) tab.insertRow(None, ['label',node.data['label']]) if '@func' in node.data: for v in node.data['@func']: + for i in '$(find ../python3 -type f -name '\''*.py'\'' |grep -v '\''Tables\.py'\'')' + 2to3 -w -n ../python3/nltk_contrib/nltk_contrib/lpath/lpathtree_qt.py RefactoringTool: Skipping optional fixer: buffer RefactoringTool: Skipping optional fixer: idioms RefactoringTool: Skipping optional fixer: set_literal RefactoringTool: Skipping optional fixer: ws_comma RefactoringTool: Refactored ../python3/nltk_contrib/nltk_contrib/lpath/lpathtree_qt.py RefactoringTool: Files that were modified: RefactoringTool: ../python3/nltk_contrib/nltk_contrib/lpath/lpathtree_qt.py --- ../python3/nltk_contrib/nltk_contrib/lpath/lpathtree_qt.py (original) +++ ../python3/nltk_contrib/nltk_contrib/lpath/lpathtree_qt.py (refactored) @@ -1,5 +1,5 @@ -from lpathtree import LPathTreeModel as PureLPathTree -from axis import * +from .lpathtree import LPathTreeModel as PureLPathTree +from .axis import * from qt import QObject __all__ = ['LPathTreeModel'] @@ -55,7 +55,7 @@ self.axis = cls(self.gui.canvas()) self.axis.target = target self.axis.root = root - apply(self.axis.setPoints, coords) + self.axis.setPoints(*coords) if self.getNot(): self.axis.setHeadType(Axis.HeadNegation) elif not self.lpOnMainTrunk(): @@ -71,7 +71,7 @@ node.axis.target = node #coords = node.gui.connectingLine(self.gui) coords = self.gui.connectingLine(node.gui) - apply(node.axis.setPoints, coords) + node.axis.setPoints(*coords) if node.getNot(): node.axis.setHeadType(Axis.HeadNegation) elif not node.lpOnMainTrunk(): + for i in '$(find 
../python3 -type f -name '\''*.py'\'' |grep -v '\''Tables\.py'\'')' + 2to3 -w -n ../python3/nltk_contrib/nltk_contrib/lpath/lpathtree.py RefactoringTool: Skipping optional fixer: buffer RefactoringTool: Skipping optional fixer: idioms RefactoringTool: Skipping optional fixer: set_literal RefactoringTool: Skipping optional fixer: ws_comma RefactoringTool: Refactored ../python3/nltk_contrib/nltk_contrib/lpath/lpathtree.py RefactoringTool: Files that were modified: RefactoringTool: ../python3/nltk_contrib/nltk_contrib/lpath/lpathtree.py --- ../python3/nltk_contrib/nltk_contrib/lpath/lpathtree.py (original) +++ ../python3/nltk_contrib/nltk_contrib/lpath/lpathtree.py (refactored) @@ -1,5 +1,5 @@ -import at_lite as at -from at_lite.tree import TreeModel as PureTree +from . import at_lite as at +from .at_lite.tree import TreeModel as PureTree __all__ = ['LPathTreeModel'] @@ -380,7 +380,7 @@ L = [] if self.lpScope is not None: def f(node): - if node.lpScope == self.lpScope and filter(node): + if node.lpScope == self.lpScope and list(filter(node)): L.append(node) self.root.dfs(f) return L + for i in '$(find ../python3 -type f -name '\''*.py'\'' |grep -v '\''Tables\.py'\'')' + 2to3 -w -n ../python3/nltk_contrib/nltk_contrib/lpath/lpath/tb2tbl.py RefactoringTool: Skipping optional fixer: buffer RefactoringTool: Skipping optional fixer: idioms RefactoringTool: Skipping optional fixer: set_literal RefactoringTool: Skipping optional fixer: ws_comma RefactoringTool: Refactored ../python3/nltk_contrib/nltk_contrib/lpath/lpath/tb2tbl.py RefactoringTool: Files that were modified: RefactoringTool: ../python3/nltk_contrib/nltk_contrib/lpath/lpath/tb2tbl.py --- ../python3/nltk_contrib/nltk_contrib/lpath/lpath/tb2tbl.py (original) +++ ../python3/nltk_contrib/nltk_contrib/lpath/lpath/tb2tbl.py (refactored) @@ -13,7 +13,7 @@ #conn.begin() #cursor.execute("begin") for r in tree.exportLPathTable(TableModel,a,b): - print r + print(r) cursor.execute(SQL1, tuple(r)) #cursor.execute("commit") conn.commit() @@ -25,8 +25,8 @@ conn = PgSQL.connect( host=opts.host, port=opts.port, database=opts.db, user=opts.user, password=opts.passwd) - except PgSQL.libpq.DatabaseError, e: - print e + except PgSQL.libpq.DatabaseError as e: + print(e) sys.exit(1) return conn elif opts.servertype == 'oracle': @@ -43,12 +43,12 @@ try: conn = MySQLdb.connect(host=opts.host, port=opts.port, db=opts.db, user=opts.user, passwd=opts.passwd) - except DatabaseError, e: - print e + except DatabaseError as e: + print(e) sys.exit(1) return conn - except ImportError, e: - print e + except ImportError as e: + print(e) sys.exit(1) def limit(servertype, sql, num): @@ -154,7 +154,7 @@ optpar.error("user name is missing") if opts.passwd is None: - print "Password:", + print("Password:", end=' ') opts.passwd = getpass() else: passwd = opts.passwd @@ -186,20 +186,20 @@ conn = connectdb(opts) cursor = conn.cursor() - print os.path.join('',os.path.dirname(sys.argv[0])) + print(os.path.join('',os.path.dirname(sys.argv[0]))) # check if table exists try: sql = limit(opts.servertype, "select * from "+opts.table, 1) cursor.execute(sql) - except DatabaseError, e: + except DatabaseError as e: if opts.create: p = os.path.join(os.path.dirname(sys.argv[0]),'lpath-schema.sql') for line in file(p).read().replace("TABLE",opts.table).split(';'): if line.strip(): cursor.execute(line) else: - print "table %s doesn't exist" % `opts.table` + print("table %s doesn't exist" % repr(opts.table)) sys.exit(1) # set correct table name in the insertion SQL @@ -232,20 +232,20 @@ reader = 
codecs.getreader('utf-8') if tbdir == '-': for tree in TreeModel.importTreebank(reader(sys.stdin)): - print tree + print(tree) do(tree) count -= 1 if count == 0: break else: for root, dirs, files in os.walk(tbdir): for f in files: - print f, + print(f, end=' ') if filter.match(f): p = os.path.join(root,f) for tree in TreeModel.importTreebank(reader(file(p))): do(tree) count -= 1 if count == 0: sys.exit(0) # done - print sid + print(sid) else: - print 'skipped' + print('skipped') + for i in '$(find ../python3 -type f -name '\''*.py'\'' |grep -v '\''Tables\.py'\'')' + 2to3 -w -n ../python3/nltk_contrib/nltk_contrib/lpath/lpath/lpath.py RefactoringTool: Skipping optional fixer: buffer RefactoringTool: Skipping optional fixer: idioms RefactoringTool: Skipping optional fixer: set_literal RefactoringTool: Skipping optional fixer: ws_comma RefactoringTool: Refactored ../python3/nltk_contrib/nltk_contrib/lpath/lpath/lpath.py --- ../python3/nltk_contrib/nltk_contrib/lpath/lpath/lpath.py (original) +++ ../python3/nltk_contrib/nltk_contrib/lpath/lpath/lpath.py (refactored) @@ -142,11 +142,11 @@ L = [] for x in self: if isinstance(x, str): - L.append(unicode(x)) - elif isinstance(x, unicode): + L.append(str(x)) + elif isinstance(x, str): L.append(x) elif isinstance(x, AND) or isinstance(x, OR) or isinstance(x, NOT): - L.append(unicode(x)) + L.append(str(x)) elif isinstance(x, flatten): for e in x: L.append("%s%s%s" % tuple(e)) @@ -155,7 +155,7 @@ elif isinstance(x, Trans): L.append("exists (%s)" % x.getSql()) else: - L.append(unicode(x)) + L.append(str(x)) L.append(self.joiner) return "(" + " ".join(L[:-1]) + ")" @@ -182,7 +182,7 @@ return "not " + str(self.lst) def __unicode__(self): - return "not " + unicode(self.lst) + return "not " + str(self.lst) class flatten(list): @@ -219,7 +219,7 @@ if hasattr(self, k): eval('self.' 
+ k) else: - raise(AttributeError("Step instance has no attribute '%s'" % k)) + raise AttributeError class Trans: @@ -286,7 +286,7 @@ s2 = self.steps[i+1] self._interpreteAxis(s, s2.axis, s2) - w = unicode(self.WHERE).strip() + w = str(self.WHERE).strip() if w: sql += "where %s" % w return sql @@ -295,7 +295,7 @@ name = "_" + t.node for c in t: name += "_" - if isinstance(c,str) or isinstance(c,unicode): + if isinstance(c,str) or isinstance(c,str): name += self.TR[c] else: name += c.node @@ -357,7 +357,7 @@ [step1.left, "<=", step2.left], [step1.right, ">=", step2.right], [step1.depth, "<", step2.depth], - "not exists (select 1 from %s z where %s)" % (self.tname,unicode(zWHERE)) + "not exists (select 1 from %s z where %s)" % (self.tname,str(zWHERE)) ] elif step2.conditional == '*': self.WHERE += [ @@ -366,7 +366,7 @@ AND([step1.left, "<=", step2.left], [step1.right, ">=", step2.right], [step1.depth, "<", step2.depth], - "not exists (select 1 from %s z where %s)" % (self.tname,unicode(zWHERE))) + "not exists (select 1 from %s z where %s)" % (self.tname,str(zWHERE))) )) ] @@ -408,7 +408,7 @@ [step1.left, ">=", step2.left], [step1.right, "<=", step2.right], [step1.depth, ">", step2.depth], - "not exists (select 1 from %s z where %s)" % (self.tname,unicode(zWHERE)) + "not exists (select 1 from %s z where %s)" % (self.tname,str(zWHERE)) ] elif step2.conditional == '*': self.WHERE += [ @@ -417,7 +417,7 @@ AND([step1.left, ">=", step2.left], [step1.right, "<=", step2.right], [step1.depth, ">", step2.depth], - "not exists (select 1 from %s z where %s)" % (self.tname,unicode(zWHERE))) + "not exists (select 1 from %s z where %s)" % (self.tname,str(zWHERE))) )) ] @@ -449,8 +449,8 @@ ["z.left", ">=", step1.right], ["z.right", "<=", step2.left], NOT(GRP(flatten(step2.getConstraints()))), - "not exists (select 1 from %s c where %s)" % (self.tname,unicode(cWHERE)), - "not exists (select 1 from %s w where %s)" % (self.tname,unicode(wWHERE)) + "not exists (select 1 from %s c where %s)" % (self.tname,str(cWHERE)), + "not exists (select 1 from %s w where %s)" % (self.tname,str(wWHERE)) ) self.WHERE += [ @@ -470,7 +470,7 @@ self.WHERE += [ [step1.right, "<=", step2.left], flatten(step2.getConstraints()), - "not exists (select 1 from %s z where %s)" % (self.tname,unicode(zWHERE)) + "not exists (select 1 from %s z where %s)" % (self.tname,str(zWHERE)) ] elif step2.conditional == '*': self.WHERE += [ @@ -479,7 +479,7 @@ GRP(AND( [step1.right, "<=", step2.left], flatten(step2.getConstraints()), - "not exists (select 1 from %s z where %s)" % (self.tname,unicode(zWHERE)) + "not exists (select 1 from %s z where %s)" % (self.tname,str(zWHERE)) )))) ] @@ -511,8 +511,8 @@ ["z.left", ">=", step2.right], ["z.right", "<=", step1.left], NOT(GRP(flatten(step2.getConstraints()))), - "not exists (select 1 from %s c where %s)" % (self.tname,unicode(cWHERE)), - "not exists (select 1 from %s w where %s)" % (self.tname,unicode(wWHERE)) + "not exists (select 1 from %s c where %s)" % (self.tname,str(cWHERE)), + "not exists (select 1 from %s w where %s)" % (self.tname,str(wWHERE)) ) self.WHERE += [ @@ -532,7 +532,7 @@ self.WHERE += [ [step1.left, ">=", step2.right], flatten(step2.getConstraints()), - "not exists (select 1 from %s z where %s)" % (self.tname,unicode(zWHERE)) + "not exists (select 1 from %s z where %s)" % (self.tname,str(zWHERE)) ] elif step2.conditional == '*': self.WHERE += [ @@ -541,7 +541,7 @@ GRP(AND( [step1.left, ">=", step2.right], flatten(step2.getConstraints()), - "not exists (select 1 from %s z where %s)" % 
(self.tname,unicode(zWHERE)) + "not exists (select 1 from %s z where %s)" % (self.tname,str(zWHERE)) )))) ] @@ -573,8 +573,8 @@ ["z.left", ">=", step1.right], ["z.right", "<=", step2.left], NOT(GRP(flatten(step2.getConstraints()))), - "not exists (select 1 from %s c where %s)" % (self.tname,unicode(cWHERE)), - "not exists (select 1 from %s w where %s)" % (self.tname,unicode(wWHERE)) + "not exists (select 1 from %s c where %s)" % (self.tname,str(cWHERE)), + "not exists (select 1 from %s w where %s)" % (self.tname,str(wWHERE)) ) self.WHERE += [ @@ -596,7 +596,7 @@ [step1.right, "<=", step2.left], [step1.pid, "=", step2.pid], flatten(step2.getConstraints()), - "not exists (select 1 from %s z where %s)" % (self.tname,unicode(zWHERE)) + "not exists (select 1 from %s z where %s)" % (self.tname,str(zWHERE)) ] elif step2.conditional == '*': self.WHERE += [ @@ -606,7 +606,7 @@ RefactoringTool: Files that were modified: RefactoringTool: ../python3/nltk_contrib/nltk_contrib/lpath/lpath/lpath.py [step1.right, "<=", step2.left], [step1.pid, "=", step2.pid], flatten(step2.getConstraints()), - "not exists (select 1 from %s z where %s)" % (self.tname,unicode(zWHERE)) + "not exists (select 1 from %s z where %s)" % (self.tname,str(zWHERE)) )))) ] @@ -638,8 +638,8 @@ ["z.left", ">=", step2.right], ["z.right", "<=", step1.left], NOT(GRP(flatten(step2.getConstraints()))), - "not exists (select 1 from %s c where %s)" % (self.tname,unicode(cWHERE)), - "not exists (select 1 from %s w where %s)" % (self.tname,unicode(wWHERE)) + "not exists (select 1 from %s c where %s)" % (self.tname,str(cWHERE)), + "not exists (select 1 from %s w where %s)" % (self.tname,str(wWHERE)) ) self.WHERE += [ @@ -661,7 +661,7 @@ [step1.left, ">=", step2.right], [step1.pid, "=", step2.pid], flatten(step2.getConstraints()), - "not exists (select 1 from %s z where %s)" % (self.tname,unicode(zWHERE)) + "not exists (select 1 from %s z where %s)" % (self.tname,str(zWHERE)) ] elif step2.conditional == '*': self.WHERE += [ @@ -671,7 +671,7 @@ [step1.left, ">=", step2.right], [step1.pid, "=", step2.pid], flatten(step2.getConstraints()), - "not exists (select 1 from %s z where %s)" % (self.tname,unicode(zWHERE)) + "not exists (select 1 from %s z where %s)" % (self.tname,str(zWHERE)) )))) ] @@ -936,7 +936,7 @@ s2 = self.steps[i+1] self._interpreteAxis(s, s2.axis, s2) - w = unicode(self.WHERE).strip() + w = str(self.WHERE).strip() if w: sql += "where %s" % w return sql @@ -950,7 +950,7 @@ for i,s in enumerate(tr.steps[:-1]): s2 = tr.steps[i+1] tr._interpreteAxis(s, s2.axis, s2) - self.WHERE.append(unicode(tr.WHERE).strip()) + self.WHERE.append(str(tr.WHERE).strip()) def translate2(q,tname='T'): global T2, T3, T4, T5, T6, GR @@ -998,13 +998,13 @@ def print_profile(): - print - print " python startup: %6.3fs" % (T1-T0) - print " query tokenization: %6.3fs" % (T3-T2) - print " grammar parsing: %6.3fs" % (T4-T3) - print " chart parsing: %6.3fs" % (T5-T4) - print " translation: %6.3fs" % (T6-T5) - print + print() + print(" python startup: %6.3fs" % (T1-T0)) + print(" query tokenization: %6.3fs" % (T3-T2)) + print(" grammar parsing: %6.3fs" % (T4-T3)) + print(" chart parsing: %6.3fs" % (T5-T4)) + print(" translation: %6.3fs" % (T6-T5)) + print() def get_profile(): # tok/grammar/parsing/trans times @@ -1038,6 +1038,6 @@ #l = tokenize('//VP[{//^V->NP->PP$}]') #l = tokenize('//A//B//C') - print translate2(sys.argv[1])[1] + print(translate2(sys.argv[1])[1]) print_profile() #print get_grammar() + for i in '$(find ../python3 -type f -name '\''*.py'\'' |grep -v 
'\''Tables\.py'\'')' + 2to3 -w -n ../python3/nltk_contrib/nltk_contrib/lpath/lpath/__init__.py RefactoringTool: Skipping optional fixer: buffer RefactoringTool: Skipping optional fixer: idioms RefactoringTool: Skipping optional fixer: set_literal RefactoringTool: Skipping optional fixer: ws_comma RefactoringTool: Refactored ../python3/nltk_contrib/nltk_contrib/lpath/lpath/__init__.py RefactoringTool: Files that were modified: RefactoringTool: ../python3/nltk_contrib/nltk_contrib/lpath/lpath/__init__.py --- ../python3/nltk_contrib/nltk_contrib/lpath/lpath/__init__.py (original) +++ ../python3/nltk_contrib/nltk_contrib/lpath/lpath/__init__.py (refactored) @@ -1 +1 @@ -from lpath import * +from .lpath import * + for i in '$(find ../python3 -type f -name '\''*.py'\'' |grep -v '\''Tables\.py'\'')' + 2to3 -w -n ../python3/nltk_contrib/nltk_contrib/lpath/dbdialog.py RefactoringTool: Skipping optional fixer: buffer RefactoringTool: Skipping optional fixer: idioms RefactoringTool: Skipping optional fixer: set_literal RefactoringTool: Skipping optional fixer: ws_comma RefactoringTool: Refactored ../python3/nltk_contrib/nltk_contrib/lpath/dbdialog.py RefactoringTool: Files that were modified: RefactoringTool: ../python3/nltk_contrib/nltk_contrib/lpath/dbdialog.py --- ../python3/nltk_contrib/nltk_contrib/lpath/dbdialog.py (original) +++ ../python3/nltk_contrib/nltk_contrib/lpath/dbdialog.py (refactored) @@ -1,5 +1,5 @@ from qt import * -from db import * +from .db import * import os try: from pyPgSQL import PgSQL @@ -67,7 +67,7 @@ conn = PgSQL.connect(**conninfo) conn2 = PgSQL.connect(**conninfo) return LPathPgSqlDB(conn, conn2, conninfo["user"].ascii()) - except PgSQL.libpq.DatabaseError, e: + except PgSQL.libpq.DatabaseError as e: try: enc = os.environ['LANG'].split('.')[-1] msg = e.message.decode(enc) @@ -111,7 +111,7 @@ try: conn = cx_Oracle.connect(user+'/'+pw+service) conn2 = cx_Oracle.connect(user+'/'+pw+service) - except cx_Oracle.DatabaseError, e: + except cx_Oracle.DatabaseError as e: try: enc = os.environ['LANG'].split('.')[-1] msg = e.__str__().decode(enc) @@ -157,7 +157,7 @@ try: conn = MySQLdb.connect(**conninfo) return LPathMySQLDB(conn) - except MySQLdb.DatabaseError, e: + except MySQLdb.DatabaseError as e: try: enc = os.environ['LANG'].split('.')[-1] msg = e.message.decode(enc) @@ -235,7 +235,7 @@ try: self.db = self.wstack.visibleWidget().connect() self.accept() - except ConnectionError, e: + except ConnectionError as e: QMessageBox.critical(self, "Connection Error", "Unable to connect to database:\n" + e.__str__()) @@ -276,7 +276,7 @@ def _okClicked(self): sel = self.listbox.selectedItem() if sel is not None: - self.tab = unicode(sel.text()) + self.tab = str(sel.text()) self.accept() else: QMessageBox.critical(self, "Error", "You didn't select a table.") + for i in '$(find ../python3 -type f -name '\''*.py'\'' |grep -v '\''Tables\.py'\'')' + 2to3 -w -n ../python3/nltk_contrib/nltk_contrib/lpath/db.py RefactoringTool: Skipping optional fixer: buffer RefactoringTool: Skipping optional fixer: idioms RefactoringTool: Skipping optional fixer: set_literal RefactoringTool: Skipping optional fixer: ws_comma RefactoringTool: Refactored ../python3/nltk_contrib/nltk_contrib/lpath/db.py RefactoringTool: Files that were modified: RefactoringTool: ../python3/nltk_contrib/nltk_contrib/lpath/db.py --- ../python3/nltk_contrib/nltk_contrib/lpath/db.py (original) +++ ../python3/nltk_contrib/nltk_contrib/lpath/db.py (refactored) @@ -4,14 +4,14 @@ import time from qt import * from threading import 
Thread, Lock -import lpath -import at_lite as at +from . import lpath +from . import at_lite as at #from pyPgSQL import PgSQL try: from sqlite3 import dbapi2 as sqlite except ImportError: from pysqlite2 import dbapi2 as sqlite -from lpathtree_qt import * +from .lpathtree_qt import * __all__ = ["LPathDB", "LPathDbI", "LPathPgSqlDB", "LPathOracleDB", "LPathMySQLDB"] @@ -89,7 +89,7 @@ LPATH_TABLE_HEADER = [ ('sid',int),('tid',int),('id',int),('pid',int), ('left',int),('right',int),('depth',int), - ('type',unicode),('name',unicode),('value',unicode) + ('type',str),('name',str),('value',str) ] EVENT_MORE_TREE = QEvent.User + for i in '$(find ../python3 -type f -name '\''*.py'\'' |grep -v '\''Tables\.py'\'')' + 2to3 -w -n ../python3/nltk_contrib/nltk_contrib/lpath/axis.py RefactoringTool: Skipping optional fixer: buffer RefactoringTool: Skipping optional fixer: idioms RefactoringTool: Skipping optional fixer: set_literal RefactoringTool: Skipping optional fixer: ws_comma RefactoringTool: Refactored ../python3/nltk_contrib/nltk_contrib/lpath/axis.py RefactoringTool: Files that were modified: RefactoringTool: ../python3/nltk_contrib/nltk_contrib/lpath/axis.py --- ../python3/nltk_contrib/nltk_contrib/lpath/axis.py (original) +++ ../python3/nltk_contrib/nltk_contrib/lpath/axis.py (refactored) @@ -137,7 +137,7 @@ self._drawNegationHead(painter) def drawShape(self, painter): - apply(painter.drawLine, self.points) + painter.drawLine(*self.points) self.drawLineHead(painter) def toggleHeadType(self): + for i in '$(find ../python3 -type f -name '\''*.py'\'' |grep -v '\''Tables\.py'\'')' + 2to3 -w -n ../python3/nltk_contrib/nltk_contrib/lpath/at_lite/treeio.py RefactoringTool: Skipping optional fixer: buffer RefactoringTool: Skipping optional fixer: idioms RefactoringTool: Skipping optional fixer: set_literal RefactoringTool: Skipping optional fixer: ws_comma RefactoringTool: Refactored ../python3/nltk_contrib/nltk_contrib/lpath/at_lite/treeio.py RefactoringTool: Files that were modified: RefactoringTool: ../python3/nltk_contrib/nltk_contrib/lpath/at_lite/treeio.py --- ../python3/nltk_contrib/nltk_contrib/lpath/at_lite/treeio.py (original) +++ ../python3/nltk_contrib/nltk_contrib/lpath/at_lite/treeio.py (refactored) @@ -14,10 +14,10 @@ continue c = n.children if c: - s += ' (' + unicode(n.data[p]) + s += ' (' + str(n.data[p]) L = c + [None] + L[1:] else: - s += ' ' + unicode(n.data[p]) + s += ' ' + str(n.data[p]) L = L[1:] return s[1:] @@ -118,7 +118,7 @@ # Make sure all the node's application-specific attributes are recorded. 
r['attributes'] = [] if n.data != None: - for attr, value in n.data.iteritems(): + for attr, value in n.data.items(): if attr == 'label': r['name'] = value else: + for i in '$(find ../python3 -type f -name '\''*.py'\'' |grep -v '\''Tables\.py'\'')' + 2to3 -w -n ../python3/nltk_contrib/nltk_contrib/lpath/at_lite/treeedit_qlistview.py RefactoringTool: Skipping optional fixer: buffer RefactoringTool: Skipping optional fixer: idioms RefactoringTool: Skipping optional fixer: set_literal RefactoringTool: Skipping optional fixer: ws_comma RefactoringTool: Refactored ../python3/nltk_contrib/nltk_contrib/lpath/at_lite/treeedit_qlistview.py RefactoringTool: Files that were modified: RefactoringTool: ../python3/nltk_contrib/nltk_contrib/lpath/at_lite/treeedit_qlistview.py --- ../python3/nltk_contrib/nltk_contrib/lpath/at_lite/treeedit_qlistview.py (original) +++ ../python3/nltk_contrib/nltk_contrib/lpath/at_lite/treeedit_qlistview.py (refactored) @@ -1,5 +1,5 @@ from qt import QListView, QListViewItem, PYSIGNAL -from myaccel import AccelKeyHandler +from .myaccel import AccelKeyHandler __all__ = ['TreeEdit'] @@ -91,7 +91,7 @@ if self.data is None: return n = self.data.__class__() x = [self] + [None] * len(self.col2str) - item = apply(TreeEditItem,x) + item = TreeEditItem(*x) for sig in ("attach","insertLeft","insertRight","prune","splice"): n.connect(n,PYSIGNAL(sig),eval("item._%s"%sig)) self.takeItem(item) @@ -147,7 +147,7 @@ x = [T[-1],n.data[fields[0][0]]] for f,v in fields[1:]: x.append(str(n.data[f])) - e = apply(TreeEditItem, x) + e = TreeEditItem(*x) for sig in ("attach","insertLeft","insertRight","prune","splice"): n.connect(n,PYSIGNAL(sig),eval("e._%s"%sig)) e.treenode = n @@ -159,7 +159,7 @@ if __name__ == "__main__": - from tree_qt import TreeModel + from .tree_qt import TreeModel import qt class Demo(qt.QVBox): + for i in '$(find ../python3 -type f -name '\''*.py'\'' |grep -v '\''Tables\.py'\'')' + 2to3 -w -n ../python3/nltk_contrib/nltk_contrib/lpath/at_lite/tree_qt.py RefactoringTool: Skipping optional fixer: buffer RefactoringTool: Skipping optional fixer: idioms RefactoringTool: Skipping optional fixer: set_literal RefactoringTool: Skipping optional fixer: ws_comma RefactoringTool: Refactored ../python3/nltk_contrib/nltk_contrib/lpath/at_lite/tree_qt.py RefactoringTool: Files that were modified: RefactoringTool: ../python3/nltk_contrib/nltk_contrib/lpath/at_lite/tree_qt.py --- ../python3/nltk_contrib/nltk_contrib/lpath/at_lite/tree_qt.py (original) +++ ../python3/nltk_contrib/nltk_contrib/lpath/at_lite/tree_qt.py (refactored) @@ -1,5 +1,5 @@ from qt import QObject, PYSIGNAL -from tree import TreeModel as PureTree +from .tree import TreeModel as PureTree __all__ = ['TreeModel'] + for i in '$(find ../python3 -type f -name '\''*.py'\'' |grep -v '\''Tables\.py'\'')' + 2to3 -w -n ../python3/nltk_contrib/nltk_contrib/lpath/at_lite/tree.py RefactoringTool: Skipping optional fixer: buffer RefactoringTool: Skipping optional fixer: idioms RefactoringTool: Skipping optional fixer: set_literal RefactoringTool: Skipping optional fixer: ws_comma RefactoringTool: Refactored ../python3/nltk_contrib/nltk_contrib/lpath/at_lite/tree.py RefactoringTool: Files that were modified: RefactoringTool: ../python3/nltk_contrib/nltk_contrib/lpath/at_lite/tree.py --- ../python3/nltk_contrib/nltk_contrib/lpath/at_lite/tree.py (original) +++ ../python3/nltk_contrib/nltk_contrib/lpath/at_lite/tree.py (refactored) @@ -1,4 +1,4 @@ -from treeio import TreeIo +from .treeio import TreeIo __all__ = ['TreeModel'] @@ -149,4 
+149,4 @@ s = "(S (NP (N I)) (VP (VP (V saw) (NP (DT the) (N man))) (PP (P with) (NP (DT a) (N telescope)))))" t = bracket_parse(s) root = TreeModel.importNltkLiteTree(t) - print root.treebankString("label") + print(root.treebankString("label")) + for i in '$(find ../python3 -type f -name '\''*.py'\'' |grep -v '\''Tables\.py'\'')' + 2to3 -w -n ../python3/nltk_contrib/nltk_contrib/lpath/at_lite/tableproxy.py RefactoringTool: Skipping optional fixer: buffer RefactoringTool: Skipping optional fixer: idioms RefactoringTool: Skipping optional fixer: set_literal RefactoringTool: Skipping optional fixer: ws_comma RefactoringTool: Refactored ../python3/nltk_contrib/nltk_contrib/lpath/at_lite/tableproxy.py RefactoringTool: Files that were modified: RefactoringTool: ../python3/nltk_contrib/nltk_contrib/lpath/at_lite/tableproxy.py --- ../python3/nltk_contrib/nltk_contrib/lpath/at_lite/tableproxy.py (original) +++ ../python3/nltk_contrib/nltk_contrib/lpath/at_lite/tableproxy.py (refactored) @@ -19,7 +19,7 @@ else: return None - def next(self): + def __next__(self): if self._limit > self._top: self._top += 1 return self._stack[self._top] @@ -149,7 +149,7 @@ def redo(self, n=1): for m in range(n): try: - op, arg1, arg2 = self.undoStack.next() + op, arg1, arg2 = next(self.undoStack) #print "redo", op, arg1, arg2 #print len(self.undoStack._stack) except TypeError: + for i in '$(find ../python3 -type f -name '\''*.py'\'' |grep -v '\''Tables\.py'\'')' + 2to3 -w -n ../python3/nltk_contrib/nltk_contrib/lpath/at_lite/tableio.py RefactoringTool: Skipping optional fixer: buffer RefactoringTool: Skipping optional fixer: idioms RefactoringTool: Skipping optional fixer: set_literal RefactoringTool: Skipping optional fixer: ws_comma RefactoringTool: Refactored ../python3/nltk_contrib/nltk_contrib/lpath/at_lite/tableio.py RefactoringTool: Files that were modified: RefactoringTool: ../python3/nltk_contrib/nltk_contrib/lpath/at_lite/tableio.py --- ../python3/nltk_contrib/nltk_contrib/lpath/at_lite/tableio.py (original) +++ ../python3/nltk_contrib/nltk_contrib/lpath/at_lite/tableio.py (refactored) @@ -27,7 +27,7 @@ import codecs import re -from error import * +from .error import * __all__ = ['TableIo'] @@ -38,7 +38,7 @@ size = [len(str(x)) for x,t in self.header] for row in self.table: for i,c in enumerate(row): - if type(c)==str or type(c)==unicode: + if type(c)==str or type(c)==str: n = len(c) else: n = len(str(c)) @@ -52,7 +52,7 @@ s += "%%%ds|" % size[i] % str(c) else: s += "%%-%ds|" % size[i] % c - print s[:-1] + print(s[:-1]) printRow([s for s,t in self.header]) for row in self.table: @@ -73,7 +73,7 @@ f = writer(file(filename,'w')) f.write("\t".join([a[0]+';'+a[1].__name__ for a in self.header]) + "\n") - for item in self.metadata.items(): + for item in list(self.metadata.items()): f.write(";;MM %s\t%s\n" % item) for row in self.table: for c in row[:-1]: @@ -81,7 +81,7 @@ f.write("\t") else: t = type(c) - if t==str or t==unicode: + if t==str or t==str: f.write(c+"\t") else: f.write(str(c)+"\t") @@ -89,18 +89,18 @@ f.write("\n") else: t = type(row[-1]) - if t==str or t==unicode: + if t==str or t==str: f.write(row[-1]+"\n") else: f.write(str(row[-1])+"\n") - except IOError, e: + except IOError as e: raise Error(ERR_TDF_EXPORT, str(e)) def importTdf(cls, filename): _,_,reader,_ = codecs.lookup('utf-8') try: f = reader(file(filename)) - except IOError, e: + except IOError as e: raise Error(ERR_TDF_IMPORT, e) head = [] for h in f.readline().rstrip("\r\n").split("\t"): @@ -125,10 +125,10 @@ try: for i,cell in 
enumerate(l.rstrip("\n").split("\t")): row.append(head[i][1](cell)) - except ValueError, e: + except ValueError as e: raise Error(ERR_TDF_IMPORT, "[%d:%d] %s" % (lno,i,str(e))) - except IndexError, e: + except IndexError as e: msg = "record has too many fields" raise Error(ERR_TDF_IMPORT, "[%d:%d] %s" % (lno,i,msg)) + for i in '$(find ../python3 -type f -name '\''*.py'\'' |grep -v '\''Tables\.py'\'')' + 2to3 -w -n ../python3/nltk_contrib/nltk_contrib/lpath/at_lite/tableedit_qtable.py RefactoringTool: Skipping optional fixer: buffer RefactoringTool: Skipping optional fixer: idioms RefactoringTool: Skipping optional fixer: set_literal RefactoringTool: Skipping optional fixer: ws_comma RefactoringTool: Refactored ../python3/nltk_contrib/nltk_contrib/lpath/at_lite/tableedit_qtable.py RefactoringTool: Files that were modified: RefactoringTool: ../python3/nltk_contrib/nltk_contrib/lpath/at_lite/tableedit_qtable.py --- ../python3/nltk_contrib/nltk_contrib/lpath/at_lite/tableedit_qtable.py (original) +++ ../python3/nltk_contrib/nltk_contrib/lpath/at_lite/tableedit_qtable.py (refactored) @@ -7,8 +7,8 @@ self.data = None def setData(self, data): - self.removeColumns(range(self.numCols())) - self.removeRows(range(self.numRows())) + self.removeColumns(list(range(self.numCols()))) + self.removeRows(list(range(self.numRows()))) self.setNumCols(len(data.header)) for j,(h,t) in enumerate(data.header): @@ -17,7 +17,7 @@ for i,row in enumerate(data): for j,h in enumerate(row): if h is not None: - if type(h)==str or type(h)==unicode: + if type(h)==str or type(h)==str: self.setText(i,j,h) else: self.setText(i,j,str(h)) @@ -36,7 +36,7 @@ if val is None: val = '' self.disconnect(self,SIGNAL("valueChanged(int,int)"),self.__cellChanged) - self.setText(i,j,unicode(val)) + self.setText(i,j,str(val)) self.connect(self,SIGNAL("valueChanged(int,int)"),self.__cellChanged) def _insertRow(self, i, row): @@ -70,7 +70,7 @@ if __name__ == '__main__': import qt - from table_qt import TableModel + from .table_qt import TableModel class Demo(qt.QVBox): def __init__(self): + for i in '$(find ../python3 -type f -name '\''*.py'\'' |grep -v '\''Tables\.py'\'')' + 2to3 -w -n ../python3/nltk_contrib/nltk_contrib/lpath/at_lite/table_qt.py RefactoringTool: Skipping optional fixer: buffer RefactoringTool: Skipping optional fixer: idioms RefactoringTool: Skipping optional fixer: set_literal RefactoringTool: Skipping optional fixer: ws_comma RefactoringTool: Refactored ../python3/nltk_contrib/nltk_contrib/lpath/at_lite/table_qt.py RefactoringTool: Files that were modified: RefactoringTool: ../python3/nltk_contrib/nltk_contrib/lpath/at_lite/table_qt.py --- ../python3/nltk_contrib/nltk_contrib/lpath/at_lite/table_qt.py (original) +++ ../python3/nltk_contrib/nltk_contrib/lpath/at_lite/table_qt.py (refactored) @@ -1,5 +1,5 @@ -import tableproxy -from table import TableModel +from . 
import tableproxy +from .table import TableModel __all__ = ['TableModel'] @@ -17,24 +17,24 @@ tab[1][2] = 3 tab.printTable() - print + print() tab.insertColumn(1,[("extra",int),10,9]) tab.printTable() - print + print() c = tab.takeColumn(1) tab.insertColumn(3,c) tab.printTable() - print + print() r = tab.takeRow(0) tab.insertRow(1,r) tab.printTable() - print + print() tab.sort(1,2) tab.printTable() + for i in '$(find ../python3 -type f -name '\''*.py'\'' |grep -v '\''Tables\.py'\'')' + 2to3 -w -n ../python3/nltk_contrib/nltk_contrib/lpath/at_lite/table.py RefactoringTool: Skipping optional fixer: buffer RefactoringTool: Skipping optional fixer: idioms RefactoringTool: Skipping optional fixer: set_literal RefactoringTool: Skipping optional fixer: ws_comma RefactoringTool: Refactored ../python3/nltk_contrib/nltk_contrib/lpath/at_lite/table.py RefactoringTool: Files that were modified: RefactoringTool: ../python3/nltk_contrib/nltk_contrib/lpath/at_lite/table.py --- ../python3/nltk_contrib/nltk_contrib/lpath/at_lite/table.py (original) +++ ../python3/nltk_contrib/nltk_contrib/lpath/at_lite/table.py (refactored) @@ -1,4 +1,4 @@ -from tableio import TableIo +from .tableio import TableIo import bisect __all__ = ['TableModel'] @@ -177,7 +177,7 @@ """ if type(col) != int: col = self.str2col[col] - return bisect.bisect_left(map(lambda x:x[col],self.table),val) + return bisect.bisect_left([x[col] for x in self.table],val) def bisect_right(self, col, val): """ @@ -185,7 +185,7 @@ """ if type(col) != int: col = self.str2col[col] - return bisect.bisect_right(map(lambda x:x[col],self.table),val) + return bisect.bisect_right([x[col] for x in self.table],val) def setMetadata(self, nam, val): if type(val) != str: @@ -210,24 +210,24 @@ tab[1][2] = 3 tab.printTable() - print + print() tab.insertColumn(1,["extra",10,9]) tab.printTable() - print + print() c = tab.takeColumn(1) tab.insertColumn(3,c) tab.printTable() - print + print() r = tab.takeRow(0) tab.insertRow(1,r) tab.printTable() - print + print() tab.sort(2,3) tab.printTable() + for i in '$(find ../python3 -type f -name '\''*.py'\'' |grep -v '\''Tables\.py'\'')' + 2to3 -w -n ../python3/nltk_contrib/nltk_contrib/lpath/at_lite/myaccel.py RefactoringTool: Skipping optional fixer: buffer RefactoringTool: Skipping optional fixer: idioms RefactoringTool: Skipping optional fixer: set_literal RefactoringTool: Skipping optional fixer: ws_comma RefactoringTool: Refactored ../python3/nltk_contrib/nltk_contrib/lpath/at_lite/myaccel.py RefactoringTool: Files that were modified: RefactoringTool: ../python3/nltk_contrib/nltk_contrib/lpath/at_lite/myaccel.py --- ../python3/nltk_contrib/nltk_contrib/lpath/at_lite/myaccel.py (original) +++ ../python3/nltk_contrib/nltk_contrib/lpath/at_lite/myaccel.py (refactored) @@ -69,7 +69,7 @@ """ bindings = {} - for keyseq,binding in keyBindings.items(): + for keyseq,binding in list(keyBindings.items()): seq = [] for subkeyseq in keyseq.split(','): a = [] + for i in '$(find ../python3 -type f -name '\''*.py'\'' |grep -v '\''Tables\.py'\'')' + 2to3 -w -n ../python3/nltk_contrib/nltk_contrib/lpath/at_lite/error.py RefactoringTool: Skipping optional fixer: buffer RefactoringTool: Skipping optional fixer: idioms RefactoringTool: Skipping optional fixer: set_literal RefactoringTool: Skipping optional fixer: ws_comma RefactoringTool: No changes to ../python3/nltk_contrib/nltk_contrib/lpath/at_lite/error.py RefactoringTool: Files that need to be modified: RefactoringTool: ../python3/nltk_contrib/nltk_contrib/lpath/at_lite/error.py + for i 
in '$(find ../python3 -type f -name '\''*.py'\'' |grep -v '\''Tables\.py'\'')' + 2to3 -w -n ../python3/nltk_contrib/nltk_contrib/lpath/at_lite/__init__.py RefactoringTool: Skipping optional fixer: buffer RefactoringTool: Skipping optional fixer: idioms RefactoringTool: Skipping optional fixer: set_literal RefactoringTool: Skipping optional fixer: ws_comma RefactoringTool: Refactored ../python3/nltk_contrib/nltk_contrib/lpath/at_lite/__init__.py RefactoringTool: Files that were modified: RefactoringTool: ../python3/nltk_contrib/nltk_contrib/lpath/at_lite/__init__.py --- ../python3/nltk_contrib/nltk_contrib/lpath/at_lite/__init__.py (original) +++ ../python3/nltk_contrib/nltk_contrib/lpath/at_lite/__init__.py (refactored) @@ -2,9 +2,9 @@ """ """ -from tree_qt import * -from treeedit_qlistview import * -from table_qt import * -from tableedit_qtable import * +from .tree_qt import * +from .treeedit_qlistview import * +from .table_qt import * +from .tableedit_qtable import * + for i in '$(find ../python3 -type f -name '\''*.py'\'' |grep -v '\''Tables\.py'\'')' + 2to3 -w -n ../python3/nltk_contrib/nltk_contrib/lpath/__init__.py RefactoringTool: Skipping optional fixer: buffer RefactoringTool: Skipping optional fixer: idioms RefactoringTool: Skipping optional fixer: set_literal RefactoringTool: Skipping optional fixer: ws_comma RefactoringTool: Refactored ../python3/nltk_contrib/nltk_contrib/lpath/__init__.py RefactoringTool: Files that were modified: RefactoringTool: ../python3/nltk_contrib/nltk_contrib/lpath/__init__.py --- ../python3/nltk_contrib/nltk_contrib/lpath/__init__.py (original) +++ ../python3/nltk_contrib/nltk_contrib/lpath/__init__.py (refactored) @@ -1 +1 @@ -from lpath import * +from .lpath import * + for i in '$(find ../python3 -type f -name '\''*.py'\'' |grep -v '\''Tables\.py'\'')' + 2to3 -w -n ../python3/nltk_contrib/nltk_contrib/lambek/typedterm.py RefactoringTool: Skipping optional fixer: buffer RefactoringTool: Skipping optional fixer: idioms RefactoringTool: Skipping optional fixer: set_literal RefactoringTool: Skipping optional fixer: ws_comma RefactoringTool: Refactored ../python3/nltk_contrib/nltk_contrib/lambek/typedterm.py --- ../python3/nltk_contrib/nltk_contrib/lambek/typedterm.py (original) +++ ../python3/nltk_contrib/nltk_contrib/lambek/typedterm.py (refactored) @@ -9,7 +9,7 @@ """CG-style types""" import types -from term import * +from .term import * ##################################### # TYPEDTERM @@ -26,14 +26,14 @@ self.type = type def __repr__(self): - return `self.term`+': '+`self.type` + return repr(self.term)+': '+repr(self.type) def pp(self, pp_varmap=None): - return self.term.pp(pp_varmap)+': '+`self.type` + return self.term.pp(pp_varmap)+': '+repr(self.type) def to_latex(self, pp_varmap=None): term = self.term.to_latex(pp_varmap) - type = `self.type` + type = repr(self.type) type = re.sub(r'\\', r'$\\backslash$', type) type = re.sub(r'\*', r'$\\cdot$', type) return term+': \\textrm{'+type+'}' @@ -72,11 +72,11 @@ def __repr__(self): if isinstance(self.result, RSlash) or \ isinstance(self.result, LSlash): - right = '('+`self.result`+')' - else: right = `self.result` + right = '('+repr(self.result)+')' + else: right = repr(self.result) if isinstance(self.arg, RSlash): - left = '('+`self.arg`+')' - else: left = `self.arg` + left = '('+repr(self.arg)+')' + else: left = repr(self.arg) return left + '\\' + right def __cmp__(self, other): if isinstance(other, LSlash) and self.arg == other.arg and \ @@ -95,15 +95,15 @@ raise TypeError('Expected Type arguments') 
def __repr__(self): if isinstance(self.result, RSlash): - left = '('+`self.result`+')' - else: left = `self.result` - return left + '/' + `self.arg` + left = '('+repr(self.result)+')' + else: left = repr(self.result) + return left + '/' + repr(self.arg) #return '('+`self.result`+'/'+`self.arg`+')' if isinstance(self.arg, LSlash): - return `self.result`+'/('+`self.arg`+')' - else: - return `self.result`+'/'+`self.arg` + return repr(self.result)+'/('+repr(self.arg)+')' + else: + return repr(self.result)+'/'+repr(self.arg) def __cmp__(self, other): if isinstance(other, RSlash) and self.arg == other.arg and \ self.result == other.result: @@ -113,7 +113,7 @@ class BaseType(Type): def __init__(self, name): - if type(name) != types.StringType: + if type(name) != bytes: raise TypeError("Expected a string name") self.name = name def __repr__(self): @@ -131,7 +131,7 @@ if not isinstance(right, Type) or not isinstance(left, Type): raise TypeError('Expected Type arguments') def __repr__(self): - return '('+`self.left`+'*'+`self.right`+')' + return '('+repr(self.left)+'*'+repr(self.right)+')' def __cmp__(self, other): if isinstance(other, Dot) and self.left == other.left and \ self.right == other.right: @@ -205,7 +205,7 @@ else: i += 1 if len(segments) != 1: - print 'Ouch!!', segments, ops + print('Ouch!!', segments, ops) return segments[0] @@ -219,16 +219,16 @@ vp = LSlash(np, s) v2 = RSlash(vp, np) AB = Dot(A, B) - print v2 - print AB - print LSlash(AB, v2) - print Dot(v2, AB) - - print parse_type('A / B') - print parse_type('A \\ B') - print parse_type('A / B / C') - print parse_type('A * B') - print parse_type('A \\ B \\ C') - print parse_type('A \\ (B / C)') - print parse_type('(A / B) \\ C') - print parse_type('(A / B) \\ C') + print(v2) + print(AB) + print(LSlash(AB, v2)) + print(Dot(v2, AB)) + + print(parse_type('A / B')) + print(parse_type('A \\ B')) + print(parse_type('A / B / C')) + print(parse_type('A * B')) + print(parse_type('A \\ B \\ C')) + print(parse_type('A \\ (B / C)')) + print(parse_type(RefactoringTool: Files that were modified: RefactoringTool: ../python3/nltk_contrib/nltk_contrib/lambek/typedterm.py '(A / B) \\ C')) + print(parse_type('(A / B) \\ C')) + for i in '$(find ../python3 -type f -name '\''*.py'\'' |grep -v '\''Tables\.py'\'')' + 2to3 -w -n ../python3/nltk_contrib/nltk_contrib/lambek/term.py RefactoringTool: Skipping optional fixer: buffer RefactoringTool: Skipping optional fixer: idioms RefactoringTool: Skipping optional fixer: set_literal RefactoringTool: Skipping optional fixer: ws_comma RefactoringTool: Refactored ../python3/nltk_contrib/nltk_contrib/lambek/term.py --- ../python3/nltk_contrib/nltk_contrib/lambek/term.py (original) +++ ../python3/nltk_contrib/nltk_contrib/lambek/term.py (refactored) @@ -25,7 +25,7 @@ Var._max_id += 1 self.id = Var._max_id def __repr__(self): - return '?' + `self.id` + return '?' 
+ repr(self.id) def pp(self, pp_varmap=None): if pp_varmap == None: pp_varmap = make_pp_varmap(self) return pp_varmap[self] @@ -40,7 +40,7 @@ class Const(Term): def __init__(self, name): - if type(name) != types.StringType: + if type(name) != bytes: raise TypeError("Expected a string name") self.name = name def __repr__(self): @@ -64,9 +64,9 @@ def __repr__(self): if isinstance(self.func, Appl) or \ isinstance(self.func, Abstr): - return '('+`self.func` + ')(' + `self.arg` + ')' - else: - return `self.func` + '(' + `self.arg` + ')' + return '('+repr(self.func) + ')(' + repr(self.arg) + ')' + else: + return repr(self.func) + '(' + repr(self.arg) + ')' def pp(self, pp_varmap=None): if pp_varmap == None: pp_varmap = make_pp_varmap(self) if isinstance(self.func, Appl) or \ @@ -101,9 +101,9 @@ def __repr__(self): if isinstance(self.body, Abstr) or \ isinstance(self.body, Appl): - return '(\\' + `self.var` + '.' + `self.body`+')' - else: - return '\\' + `self.var` + '.' + `self.body` + return '(\\' + repr(self.var) + '.' + repr(self.body)+')' + else: + return '\\' + repr(self.var) + '.' + repr(self.body) def pp(self, pp_varmap=None): if pp_varmap == None: pp_varmap = make_pp_varmap(self) if isinstance(self.body, Abstr) or \ @@ -136,7 +136,7 @@ not isinstance(self.right, Term): raise TypeError('Expected Term arguments') def __repr__(self): - return '<'+`self.left`+', '+`self.right`+'>' + return '<'+repr(self.left)+', '+repr(self.right)+'>' def pp(self, pp_varmap=None): if pp_varmap == None: pp_varmap = make_pp_varmap(self) return '<'+self.left.pp(pp_varmap)+', '+\ @@ -160,20 +160,20 @@ # Get the remaining names. freenames = [n for n in Term.FREEVAR_NAME \ - if n not in pp_varmap.values()] + if n not in list(pp_varmap.values())] boundnames = Term.BOUNDVAR_NAME[:] for fv in free: - if not pp_varmap.has_key(fv): + if fv not in pp_varmap: if freenames == []: - pp_varmap[fv] = `fv` + pp_varmap[fv] = repr(fv) else: pp_varmap[fv] = freenames.pop() for bv in bound: - if not pp_varmap.has_key(bv): + if bv not in pp_varmap: if boundnames == []: - pp_varmap[bv] = `bv` + pp_varmap[bv] = repr(bv) else: pp_varmap[bv] = boundnames.pop() @@ -183,7 +183,7 @@ def __init__(self): self._map = {} def add(self, var, term): - if self._map.has_key(var): + if var in self._map: if term != None and term != self._map[var]: # Unclear what I should do here -- for now, just pray # for the best. :) @@ -191,7 +191,7 @@ else: self._map[var] = term def __repr__(self): - return `self._map` + return repr(self._map) def _get(self, var, orig, getNone=1): val = self._map[var] if not getNone and val == None: return var @@ -201,17 +201,17 @@ # Break the loop at an arbitrary point. 
del(self._map[val]) return val - elif self._map.has_key(val): + elif val in self._map: return(self._get(val, orig, getNone)) else: return val def __getitem__(self, var): - if self._map.has_key(var): + if var in self._map: return self._get(var, var, 1) else: RefactoringTool: Files that were modified: RefactoringTool: ../python3/nltk_contrib/nltk_contrib/lambek/term.py return var def simplify(self, var): - if self._map.has_key(var): + if var in self._map: return self._get(var, var, 0) else: return var @@ -221,7 +221,7 @@ return result def __add__(self, other): result = self.copy() - for var in other._map.keys(): + for var in list(other._map.keys()): result.add(var, other[var]) return result def copy_from(self, other): @@ -251,7 +251,7 @@ _VERBOSE = 0 def unify(term1, term2, varmap=None, depth=0): - if _VERBOSE: print ' '*depth+'>> unify', term1, term2, varmap + if _VERBOSE: print(' '*depth+'>> unify', term1, term2, varmap) term1 = reduce(term1) term2 = reduce(term2) if varmap == None: varmap = VarMap() @@ -260,18 +260,18 @@ result = unify_oneway(term1, term2, varmap, depth+1) if result: if _VERBOSE: - print ' '*depth+'<', result + print(' '*depth+'<', result) return result varmap.copy_from(old_varmap) result = unify_oneway(term2, term1, varmap, depth+1) if result: if _VERBOSE: - print ' '*depth+'<', result + print(' '*depth+'<', result) return result #raise(ValueError("can't unify", term1, term2, varmap)) if _VERBOSE: - print ' '*depth+'unify', term1, term2, varmap, '=>', None + print(' '*depth+'unify', term1, term2, varmap, '=>', None) return None @@ -514,7 +514,7 @@ var = re.match(r'\?(.*)', str) if var: varname = var.groups()[0] - if varmap.has_key(varname): + if varname in varmap: return varmap[varname] else: var = Var() @@ -535,22 +535,22 @@ f3 = Abstr(x, Appl(c, x)) f4 = Abstr(y, Appl(c, y)) - print f1, '=>', reduce(f1) - print f2, '=>', reduce(f2) - print f3, '=>', reduce(f3) - - print f1.pp() - print f2.pp() - print f3.pp() - - print - print unify(x, y) - print unify(x, c) - print unify(x, f1) - print unify(f3, f4) - print unify(Abstr(x,Appl(x,x)), Abstr(y,Appl(y,y))) - - print parse_term('<(\?var.(?var))(?other_var),?x>').pp() + print(f1, '=>', reduce(f1)) + print(f2, '=>', reduce(f2)) + print(f3, '=>', reduce(f3)) + + print(f1.pp()) + print(f2.pp()) + print(f3.pp()) + + print() + print(unify(x, y)) + print(unify(x, c)) + print(unify(x, f1)) + print(unify(f3, f4)) + print(unify(Abstr(x,Appl(x,x)), Abstr(y,Appl(y,y)))) + + print(parse_term('<(\?var.(?var))(?other_var),?x>').pp()) reduce(parse_term('')) + for i in '$(find ../python3 -type f -name '\''*.py'\'' |grep -v '\''Tables\.py'\'')' + 2to3 -w -n ../python3/nltk_contrib/nltk_contrib/lambek/lexicon.py RefactoringTool: Skipping optional fixer: buffer RefactoringTool: Skipping optional fixer: idioms RefactoringTool: Skipping optional fixer: set_literal RefactoringTool: Skipping optional fixer: ws_comma RefactoringTool: Refactored ../python3/nltk_contrib/nltk_contrib/lambek/lexicon.py RefactoringTool: Files that were modified: RefactoringTool: ../python3/nltk_contrib/nltk_contrib/lambek/lexicon.py --- ../python3/nltk_contrib/nltk_contrib/lambek/lexicon.py (original) +++ ../python3/nltk_contrib/nltk_contrib/lambek/lexicon.py (refactored) @@ -13,8 +13,8 @@ """ -from term import * -from typedterm import * +from .term import * +from .typedterm import * # Map from word to TypedTerm class Lexicon: @@ -29,16 +29,16 @@ (word, term, type) = line.split(':') te = TypedTerm(parse_term(term), parse_type(type)) except ValueError: - print 'Bad 
line:', line + print('Bad line:', line) continue word = word.strip().lower() - if self._map.has_key(word): - print 'Duplicate definitions for', word + if word in self._map: + print('Duplicate definitions for', word) self._map[word] = te def words(self): - return self._map.keys() + return list(self._map.keys()) def __getitem__(self, word): word = word.strip().lower() + for i in '$(find ../python3 -type f -name '\''*.py'\'' |grep -v '\''Tables\.py'\'')' + 2to3 -w -n ../python3/nltk_contrib/nltk_contrib/lambek/lambek.py RefactoringTool: Skipping optional fixer: buffer RefactoringTool: Skipping optional fixer: idioms RefactoringTool: Skipping optional fixer: set_literal RefactoringTool: Skipping optional fixer: ws_comma RefactoringTool: Refactored ../python3/nltk_contrib/nltk_contrib/lambek/lambek.py --- ../python3/nltk_contrib/nltk_contrib/lambek/lambek.py (original) +++ ../python3/nltk_contrib/nltk_contrib/lambek/lambek.py (refactored) @@ -18,9 +18,9 @@ _VAR_NAMES = 1 _SHOW_VARMAP = not _VAR_NAMES -from term import * -from typedterm import * -from lexicon import * +from .term import * +from .typedterm import * +from .lexicon import * import sys, re class Sequent: @@ -30,8 +30,8 @@ def __init__(self, left, right): # Check types, because we're paranoid. - if type(left) not in [types.ListType, types.TupleType] or \ - type(right) not in [types.ListType, types.TupleType]: + if type(left) not in [list, tuple] or \ + type(right) not in [list, tuple]: raise TypeError('Expected lists of TypedTerms') for elt in left+right: if not isinstance(elt, TypedTerm): @@ -41,8 +41,8 @@ self.right = right def __repr__(self): - left_str = `self.left`[1:-1] - right_str = `self.right`[1:-1] + left_str = repr(self.left)[1:-1] + right_str = repr(self.right)[1:-1] return left_str + ' => ' + right_str def to_latex(self, pp_varmap=None): @@ -86,8 +86,8 @@ self.varmap = varmap def __repr__(self): - return self.rule+' '+`self.assumptions`+' -> '\ - +`self.conclusion` + return self.rule+' '+repr(self.assumptions)+' -> '\ + +repr(self.conclusion) def simplify(self, varmap=None): if varmap == None: @@ -157,7 +157,7 @@ if _VAR_NAMES: concl = self.conclusion.pp(pp_varmap) else: - concl = `self.conclusion` + concl = repr(self.conclusion) # Draw assumptions for assumption in self.assumptions: @@ -175,7 +175,7 @@ if toplevel: if _SHOW_VARMAP: - return str+'\nVarmap: '+ `self.varmap`+'\n' + return str+'\nVarmap: '+ repr(self.varmap)+'\n' else: return str else: @@ -225,7 +225,7 @@ def _prove(sequent, varmap, short_circuit, depth): if _VERBOSE: - print (' '*depth)+'Trying to prove', sequent + print((' '*depth)+'Trying to prove', sequent) proofs = [] @@ -245,7 +245,7 @@ proofs = proofs + dot_r(sequent, varmap, short_circuit, depth+1) if _VERBOSE: - print ' '*depth+'Found '+`len(proofs)`+' proof(s)' + print(' '*depth+'Found '+repr(len(proofs))+' proof(s)') return proofs @@ -506,14 +506,14 @@ sq = Sequent(left, right) proofs = prove(sq, short_circuit) if proofs: - print '#'*60 - print "## Proof(s) for", sq.pp() + print('#'*60) + print("## Proof(s) for", sq.pp()) for proof in proofs: - print - print proof.to_latex() + print() + print(proof.to_latex()) else: - print '#'*60 - print "## Can't prove", sq.pp() + print('#'*60) + print("## Can't prove", sq.pp()) def test_lambek(): lex = Lexicon() @@ -573,70 +573,70 @@ if str.lower().endswith('off'): latexmode = 0 elif str.lower().endswith('on'): latexmode = 1 else: latexmode = not latexmode - if latexmode: print >>out, '% latexmode on' - else: print >>out, 'latexmode off' + if latexmode: 
print('% latexmode on', file=out) + else: print('latexmode off', file=out) elif str.lower().startswith('short'): if str.lower().endswith('off'): shortcircuit = 0 elif str.lower().endswith('on'): shortcircuit = 1 else: shortcircuit = not shortcircuit - if shortcircuit: print >>out, '%shortcircuit on' - else: print >>out, '% shortcircuit off' + if shortcircuit: print('%shortcircuit on', file=out) + else: print('% shortcircuit off', file=out) elif str.lower().startswith('lex'): words =RefactoringTool: Files that were modified: RefactoringTool: ../python3/nltk_contrib/nltk_contrib/lambek/lambek.py lex.words() - print >>out, '% Lexicon: ' + print('% Lexicon: ', file=out) for word in words: - print >>out, '% ' + word + ':', \ - ' '*(14-len(word)) + lex[word].pp() + print('% ' + word + ':', \ + ' '*(14-len(word)) + lex[word].pp(), file=out) elif str.lower().startswith('q'): return elif str.lower().startswith('x'): return else: - print >>out, HELP + print(HELP, file=out) else: try: (left, right) = str.split('=>') seq = Sequent(lex.parse(left), lex.parse(right)) proofs = prove(seq, shortcircuit) - print >>out - print >>out, '%'*60 + print(file=out) + print('%'*60, file=out) if proofs: - print >>out, "%% Proof(s) for", seq.pp() + print("%% Proof(s) for", seq.pp(), file=out) for proof in proofs: - print >>out - if latexmode: print >>out, proof.to_latex() - else: print >>out, proof.pp() + print(file=out) + if latexmode: print(proof.to_latex(), file=out) + else: print(proof.pp(), file=out) else: - print >>out, "%% Can't prove", seq.pp() - except KeyError, e: - print 'Mal-formatted sequent' - print 'Key error (unknown lexicon entry?)' - print e - except ValueError, e: - print 'Mal-formatted sequent' - print e + print("%% Can't prove", seq.pp(), file=out) + except KeyError as e: + print('Mal-formatted sequent') + print('Key error (unknown lexicon entry?)') + print(e) + except ValueError as e: + print('Mal-formatted sequent') + print(e) # Usage: argv[0] lexiconfile def main(argv): if (len(argv) != 2) and (len(argv) != 4): - print 'Usage:', argv[0], '' - print 'Usage:', argv[0], ' ' + print('Usage:', argv[0], '') + print('Usage:', argv[0], ' ') return lex = Lexicon() try: lex.load(open(argv[1], 'r')) except: - print "Error loading lexicon file" + print("Error loading lexicon file") return if len(argv) == 2: mainloop(sys.stdin, sys.stdout, lex, 0, 1) else: out = open(argv[3], 'w') - print >>out, '\documentclass{article}' - print >>out, '\usepackage{fullpage}' - print >>out, '\\begin{document}' - print >>out + print('\documentclass{article}', file=out) + print('\\usepackage{fullpage}', file=out) + print('\\begin{document}', file=out) + print(file=out) mainloop(open(argv[2], 'r'), out, lex, 1, 1) - print >>out - print >>out, '\\end{document}' + print(file=out) + print('\\end{document}', file=out) if __name__ == '__main__': main(sys.argv) + for i in '$(find ../python3 -type f -name '\''*.py'\'' |grep -v '\''Tables\.py'\'')' + 2to3 -w -n ../python3/nltk_contrib/nltk_contrib/lambek/__init__.py RefactoringTool: Skipping optional fixer: buffer RefactoringTool: Skipping optional fixer: idioms RefactoringTool: Skipping optional fixer: set_literal RefactoringTool: Skipping optional fixer: ws_comma RefactoringTool: No files need to be modified. 
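
For reference: each "+ for i in '$(find ../python3 ...)'" / "+ 2to3 -w -n <file>" pair above is the %prep script looping over every .py file under ../python3 except Tables.py and converting it in place (-w writes the changes back, -n skips backup files). The diffs all come from the same handful of stock 2to3 fixers: fix_import (implicit relative imports become explicit "from . import ..."), fix_print (print statements become function calls), fix_except ("except E, e" becomes "except E as e"), fix_unicode (the name unicode becomes str), fix_apply ("apply(f, args)" becomes "f(*args)"), fix_repr (backtick repr becomes repr()), fix_has_key ("d.has_key(k)" becomes "k in d"), fix_dict (iteritems()/items() become items(), wrapped in list() where a list is consumed), and fix_next ("gen.next()" becomes "next(gen)" and next() methods become __next__). The buffer, idioms, set_literal, and ws_comma fixers are optional and, as the log notes, are skipped. Below is a minimal before/after sketch of these patterns; it is hypothetical demo code, not taken from the nltk_contrib sources:

    # Each line shows the Python 3 form 2to3 emits; the comment shows the
    # Python 2 original it replaces and the fixer responsible.
    d = {'a': 1}
    for k, v in d.items():          # was: d.iteritems()         (fix_dict)
        print(k, v)                 # was: print k, v            (fix_print)
    print('a' in d)                 # was: d.has_key('a')        (fix_has_key)
    try:
        s = str(42)                 # was: unicode(42)           (fix_unicode)
    except ValueError as e:         # was: except ValueError, e  (fix_except)
        print(repr(e))              # was: print `e`             (fix_repr)
    coords = (1, 9)
    print(max(*coords))             # was: apply(max, coords)    (fix_apply)
    g = iter([1, 2])
    print(next(g))                  # was: g.next()              (fix_next)
    # Inside a package, "import lpath" becomes "from . import lpath"
    # (fix_import); omitted here because it only runs within a package.
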
+ for i in '$(find ../python3 -type f -name '\''*.py'\'' |grep -v '\''Tables\.py'\'')' + 2to3 -w -n ../python3/nltk_contrib/nltk_contrib/hadoop/word_count/wordcount_reducer.py RefactoringTool: Skipping optional fixer: buffer RefactoringTool: Skipping optional fixer: idioms RefactoringTool: Skipping optional fixer: set_literal RefactoringTool: Skipping optional fixer: ws_comma RefactoringTool: No changes to ../python3/nltk_contrib/nltk_contrib/hadoop/word_count/wordcount_reducer.py RefactoringTool: Files that need to be modified: RefactoringTool: ../python3/nltk_contrib/nltk_contrib/hadoop/word_count/wordcount_reducer.py + for i in '$(find ../python3 -type f -name '\''*.py'\'' |grep -v '\''Tables\.py'\'')' + 2to3 -w -n ../python3/nltk_contrib/nltk_contrib/hadoop/word_count/wordcount_mapper.py RefactoringTool: Skipping optional fixer: buffer RefactoringTool: Skipping optional fixer: idioms RefactoringTool: Skipping optional fixer: set_literal RefactoringTool: Skipping optional fixer: ws_comma RefactoringTool: No changes to ../python3/nltk_contrib/nltk_contrib/hadoop/word_count/wordcount_mapper.py RefactoringTool: Files that need to be modified: RefactoringTool: ../python3/nltk_contrib/nltk_contrib/hadoop/word_count/wordcount_mapper.py + for i in '$(find ../python3 -type f -name '\''*.py'\'' |grep -v '\''Tables\.py'\'')' + 2to3 -w -n ../python3/nltk_contrib/nltk_contrib/hadoop/tf_idf/tfidf_reduce1.py RefactoringTool: Skipping optional fixer: buffer RefactoringTool: Skipping optional fixer: idioms RefactoringTool: Skipping optional fixer: set_literal RefactoringTool: Skipping optional fixer: ws_comma RefactoringTool: No changes to ../python3/nltk_contrib/nltk_contrib/hadoop/tf_idf/tfidf_reduce1.py RefactoringTool: Files that need to be modified: RefactoringTool: ../python3/nltk_contrib/nltk_contrib/hadoop/tf_idf/tfidf_reduce1.py + for i in '$(find ../python3 -type f -name '\''*.py'\'' |grep -v '\''Tables\.py'\'')' + 2to3 -w -n ../python3/nltk_contrib/nltk_contrib/hadoop/tf_idf/tfidf_map2.py RefactoringTool: Skipping optional fixer: buffer RefactoringTool: Skipping optional fixer: idioms RefactoringTool: Skipping optional fixer: set_literal RefactoringTool: Skipping optional fixer: ws_comma RefactoringTool: No changes to ../python3/nltk_contrib/nltk_contrib/hadoop/tf_idf/tfidf_map2.py RefactoringTool: Files that need to be modified: RefactoringTool: ../python3/nltk_contrib/nltk_contrib/hadoop/tf_idf/tfidf_map2.py + for i in '$(find ../python3 -type f -name '\''*.py'\'' |grep -v '\''Tables\.py'\'')' + 2to3 -w -n ../python3/nltk_contrib/nltk_contrib/hadoop/tf_idf/tfidf_map1.py RefactoringTool: Skipping optional fixer: buffer RefactoringTool: Skipping optional fixer: idioms RefactoringTool: Skipping optional fixer: set_literal RefactoringTool: Skipping optional fixer: ws_comma RefactoringTool: No changes to ../python3/nltk_contrib/nltk_contrib/hadoop/tf_idf/tfidf_map1.py RefactoringTool: Files that need to be modified: RefactoringTool: ../python3/nltk_contrib/nltk_contrib/hadoop/tf_idf/tfidf_map1.py + for i in '$(find ../python3 -type f -name '\''*.py'\'' |grep -v '\''Tables\.py'\'')' + 2to3 -w -n ../python3/nltk_contrib/nltk_contrib/hadoop/tf_idf/tf_reduce.py RefactoringTool: Skipping optional fixer: buffer RefactoringTool: Skipping optional fixer: idioms RefactoringTool: Skipping optional fixer: set_literal RefactoringTool: Skipping optional fixer: ws_comma RefactoringTool: No changes to ../python3/nltk_contrib/nltk_contrib/hadoop/tf_idf/tf_reduce.py RefactoringTool: Files that need to be 
modified: RefactoringTool: ../python3/nltk_contrib/nltk_contrib/hadoop/tf_idf/tf_reduce.py + for i in '$(find ../python3 -type f -name '\''*.py'\'' |grep -v '\''Tables\.py'\'')' + 2to3 -w -n ../python3/nltk_contrib/nltk_contrib/hadoop/tf_idf/tf_map.py RefactoringTool: Skipping optional fixer: buffer RefactoringTool: Skipping optional fixer: idioms RefactoringTool: Skipping optional fixer: set_literal RefactoringTool: Skipping optional fixer: ws_comma RefactoringTool: No changes to ../python3/nltk_contrib/nltk_contrib/hadoop/tf_idf/tf_map.py RefactoringTool: Files that need to be modified: RefactoringTool: ../python3/nltk_contrib/nltk_contrib/hadoop/tf_idf/tf_map.py + for i in '$(find ../python3 -type f -name '\''*.py'\'' |grep -v '\''Tables\.py'\'')' + 2to3 -w -n ../python3/nltk_contrib/nltk_contrib/hadoop/tf_idf/sort.py RefactoringTool: Skipping optional fixer: buffer RefactoringTool: Skipping optional fixer: idioms RefactoringTool: Skipping optional fixer: set_literal RefactoringTool: Skipping optional fixer: ws_comma RefactoringTool: Refactored ../python3/nltk_contrib/nltk_contrib/hadoop/tf_idf/sort.py RefactoringTool: Files that were modified: RefactoringTool: ../python3/nltk_contrib/nltk_contrib/hadoop/tf_idf/sort.py --- ../python3/nltk_contrib/nltk_contrib/hadoop/tf_idf/sort.py (original) +++ ../python3/nltk_contrib/nltk_contrib/hadoop/tf_idf/sort.py (refactored) @@ -11,4 +11,4 @@ li.sort() for e in li: - print e, + print(e, end=' ') + for i in '$(find ../python3 -type f -name '\''*.py'\'' |grep -v '\''Tables\.py'\'')' + 2to3 -w -n ../python3/nltk_contrib/nltk_contrib/hadoop/tf_idf/idf_reduce.py RefactoringTool: Skipping optional fixer: buffer RefactoringTool: Skipping optional fixer: idioms RefactoringTool: Skipping optional fixer: set_literal RefactoringTool: Skipping optional fixer: ws_comma RefactoringTool: No changes to ../python3/nltk_contrib/nltk_contrib/hadoop/tf_idf/idf_reduce.py RefactoringTool: Files that need to be modified: RefactoringTool: ../python3/nltk_contrib/nltk_contrib/hadoop/tf_idf/idf_reduce.py + for i in '$(find ../python3 -type f -name '\''*.py'\'' |grep -v '\''Tables\.py'\'')' + 2to3 -w -n ../python3/nltk_contrib/nltk_contrib/hadoop/tf_idf/idf_map.py RefactoringTool: Skipping optional fixer: buffer RefactoringTool: Skipping optional fixer: idioms RefactoringTool: Skipping optional fixer: set_literal RefactoringTool: Skipping optional fixer: ws_comma RefactoringTool: No changes to ../python3/nltk_contrib/nltk_contrib/hadoop/tf_idf/idf_map.py RefactoringTool: Files that need to be modified: RefactoringTool: ../python3/nltk_contrib/nltk_contrib/hadoop/tf_idf/idf_map.py + for i in '$(find ../python3 -type f -name '\''*.py'\'' |grep -v '\''Tables\.py'\'')' + 2to3 -w -n ../python3/nltk_contrib/nltk_contrib/hadoop/name_similarity/value_aggregater.py RefactoringTool: Skipping optional fixer: buffer RefactoringTool: Skipping optional fixer: idioms RefactoringTool: Skipping optional fixer: set_literal RefactoringTool: Skipping optional fixer: ws_comma RefactoringTool: No changes to ../python3/nltk_contrib/nltk_contrib/hadoop/name_similarity/value_aggregater.py RefactoringTool: Files that need to be modified: RefactoringTool: ../python3/nltk_contrib/nltk_contrib/hadoop/name_similarity/value_aggregater.py + for i in '$(find ../python3 -type f -name '\''*.py'\'' |grep -v '\''Tables\.py'\'')' + 2to3 -w -n ../python3/nltk_contrib/nltk_contrib/hadoop/name_similarity/swap_mapper.py RefactoringTool: Skipping optional fixer: buffer RefactoringTool: Skipping optional fixer: 
idioms RefactoringTool: Skipping optional fixer: set_literal RefactoringTool: Skipping optional fixer: ws_comma RefactoringTool: No changes to ../python3/nltk_contrib/nltk_contrib/hadoop/name_similarity/swap_mapper.py RefactoringTool: Files that need to be modified: RefactoringTool: ../python3/nltk_contrib/nltk_contrib/hadoop/name_similarity/swap_mapper.py + for i in '$(find ../python3 -type f -name '\''*.py'\'' |grep -v '\''Tables\.py'\'')' + 2to3 -w -n ../python3/nltk_contrib/nltk_contrib/hadoop/name_similarity/similiar_name_reducer.py RefactoringTool: Skipping optional fixer: buffer RefactoringTool: Skipping optional fixer: idioms RefactoringTool: Skipping optional fixer: set_literal RefactoringTool: Skipping optional fixer: ws_comma RefactoringTool: No changes to ../python3/nltk_contrib/nltk_contrib/hadoop/name_similarity/similiar_name_reducer.py RefactoringTool: Files that need to be modified: RefactoringTool: ../python3/nltk_contrib/nltk_contrib/hadoop/name_similarity/similiar_name_reducer.py + for i in '$(find ../python3 -type f -name '\''*.py'\'' |grep -v '\''Tables\.py'\'')' + 2to3 -w -n ../python3/nltk_contrib/nltk_contrib/hadoop/name_similarity/name_mapper2.py RefactoringTool: Skipping optional fixer: buffer RefactoringTool: Skipping optional fixer: idioms RefactoringTool: Skipping optional fixer: set_literal RefactoringTool: Skipping optional fixer: ws_comma RefactoringTool: No changes to ../python3/nltk_contrib/nltk_contrib/hadoop/name_similarity/name_mapper2.py RefactoringTool: Files that need to be modified: RefactoringTool: ../python3/nltk_contrib/nltk_contrib/hadoop/name_similarity/name_mapper2.py + for i in '$(find ../python3 -type f -name '\''*.py'\'' |grep -v '\''Tables\.py'\'')' + 2to3 -w -n ../python3/nltk_contrib/nltk_contrib/hadoop/name_similarity/name_mapper1.py RefactoringTool: Skipping optional fixer: buffer RefactoringTool: Skipping optional fixer: idioms RefactoringTool: Skipping optional fixer: set_literal RefactoringTool: Skipping optional fixer: ws_comma RefactoringTool: No changes to ../python3/nltk_contrib/nltk_contrib/hadoop/name_similarity/name_mapper1.py RefactoringTool: Files that need to be modified: RefactoringTool: ../python3/nltk_contrib/nltk_contrib/hadoop/name_similarity/name_mapper1.py + for i in '$(find ../python3 -type f -name '\''*.py'\'' |grep -v '\''Tables\.py'\'')' + 2to3 -w -n ../python3/nltk_contrib/nltk_contrib/hadoop/hadooplib/util.py RefactoringTool: Skipping optional fixer: buffer RefactoringTool: Skipping optional fixer: idioms RefactoringTool: Skipping optional fixer: set_literal RefactoringTool: Skipping optional fixer: ws_comma RefactoringTool: Refactored ../python3/nltk_contrib/nltk_contrib/hadoop/hadooplib/util.py RefactoringTool: Files that were modified: RefactoringTool: ../python3/nltk_contrib/nltk_contrib/hadoop/hadooplib/util.py --- ../python3/nltk_contrib/nltk_contrib/hadoop/hadooplib/util.py (original) +++ ../python3/nltk_contrib/nltk_contrib/hadoop/hadooplib/util.py (refactored) @@ -31,7 +31,7 @@ return s else: - raise ValueError, "The first parameter must be a tuple" + raise ValueError("The first parameter must be a tuple") def str2tuple(s, separator = ' '): """ @@ -55,7 +55,7 @@ t = s.strip().split(separator) return tuple(t) else: - raise ValueError, "the first parameter must be a string" + raise ValueError("the first parameter must be a string") if __name__ == "__main__": + for i in '$(find ../python3 -type f -name '\''*.py'\'' |grep -v '\''Tables\.py'\'')' + 2to3 -w -n 
../python3/nltk_contrib/nltk_contrib/hadoop/hadooplib/reducer.py RefactoringTool: Skipping optional fixer: buffer RefactoringTool: Skipping optional fixer: idioms RefactoringTool: Skipping optional fixer: set_literal RefactoringTool: Skipping optional fixer: ws_comma RefactoringTool: Refactored ../python3/nltk_contrib/nltk_contrib/hadoop/hadooplib/reducer.py RefactoringTool: Files that were modified: RefactoringTool: ../python3/nltk_contrib/nltk_contrib/hadoop/hadooplib/reducer.py --- ../python3/nltk_contrib/nltk_contrib/hadoop/hadooplib/reducer.py (original) +++ ../python3/nltk_contrib/nltk_contrib/hadoop/hadooplib/reducer.py (refactored) @@ -1,8 +1,8 @@ from itertools import groupby from operator import itemgetter -from inputformat import KeyValueInput -from outputcollector import LineOutput +from .inputformat import KeyValueInput +from .outputcollector import LineOutput class ReducerBase: """ @@ -44,7 +44,7 @@ """ for key, group in groupby(data, itemgetter(0)): - values = map(itemgetter(1), group) + values = list(map(itemgetter(1), group)) yield key, values def reduce(self, key, values): + for i in '$(find ../python3 -type f -name '\''*.py'\'' |grep -v '\''Tables\.py'\'')' + 2to3 -w -n ../python3/nltk_contrib/nltk_contrib/hadoop/hadooplib/outputcollector.py RefactoringTool: Skipping optional fixer: buffer RefactoringTool: Skipping optional fixer: idioms RefactoringTool: Skipping optional fixer: set_literal RefactoringTool: Skipping optional fixer: ws_comma RefactoringTool: Refactored ../python3/nltk_contrib/nltk_contrib/hadoop/hadooplib/outputcollector.py RefactoringTool: Files that were modified: RefactoringTool: ../python3/nltk_contrib/nltk_contrib/hadoop/hadooplib/outputcollector.py --- ../python3/nltk_contrib/nltk_contrib/hadoop/hadooplib/outputcollector.py (original) +++ ../python3/nltk_contrib/nltk_contrib/hadoop/hadooplib/outputcollector.py (refactored) @@ -20,4 +20,4 @@ keystr = str(key) valuestr = str(value) - print '%s%s%s' % (keystr, separator, valuestr) + print('%s%s%s' % (keystr, separator, valuestr)) + for i in '$(find ../python3 -type f -name '\''*.py'\'' |grep -v '\''Tables\.py'\'')' + 2to3 -w -n ../python3/nltk_contrib/nltk_contrib/hadoop/hadooplib/mapper.py RefactoringTool: Skipping optional fixer: buffer RefactoringTool: Skipping optional fixer: idioms RefactoringTool: Skipping optional fixer: set_literal RefactoringTool: Skipping optional fixer: ws_comma RefactoringTool: Refactored ../python3/nltk_contrib/nltk_contrib/hadoop/hadooplib/mapper.py RefactoringTool: Files that were modified: RefactoringTool: ../python3/nltk_contrib/nltk_contrib/hadoop/hadooplib/mapper.py --- ../python3/nltk_contrib/nltk_contrib/hadoop/hadooplib/mapper.py (original) +++ ../python3/nltk_contrib/nltk_contrib/hadoop/hadooplib/mapper.py (refactored) @@ -1,5 +1,5 @@ -from inputformat import TextLineInput -from outputcollector import LineOutput +from .inputformat import TextLineInput +from .outputcollector import LineOutput class MapperBase: + for i in '$(find ../python3 -type f -name '\''*.py'\'' |grep -v '\''Tables\.py'\'')' + 2to3 -w -n ../python3/nltk_contrib/nltk_contrib/hadoop/hadooplib/inputformat.py RefactoringTool: Skipping optional fixer: buffer RefactoringTool: Skipping optional fixer: idioms RefactoringTool: Skipping optional fixer: set_literal RefactoringTool: Skipping optional fixer: ws_comma RefactoringTool: No changes to ../python3/nltk_contrib/nltk_contrib/hadoop/hadooplib/inputformat.py RefactoringTool: Files that need to be modified: RefactoringTool: 
../python3/nltk_contrib/nltk_contrib/hadoop/hadooplib/inputformat.py + for i in '$(find ../python3 -type f -name '\''*.py'\'' |grep -v '\''Tables\.py'\'')' + 2to3 -w -n ../python3/nltk_contrib/nltk_contrib/hadoop/hadooplib/__init__.py RefactoringTool: Skipping optional fixer: buffer RefactoringTool: Skipping optional fixer: idioms RefactoringTool: Skipping optional fixer: set_literal RefactoringTool: Skipping optional fixer: ws_comma RefactoringTool: No files need to be modified. + for i in '$(find ../python3 -type f -name '\''*.py'\'' |grep -v '\''Tables\.py'\'')' + 2to3 -w -n ../python3/nltk_contrib/nltk_contrib/hadoop/__init__.py RefactoringTool: Skipping optional fixer: buffer RefactoringTool: Skipping optional fixer: idioms RefactoringTool: Skipping optional fixer: set_literal RefactoringTool: Skipping optional fixer: ws_comma RefactoringTool: No files need to be modified. + for i in '$(find ../python3 -type f -name '\''*.py'\'' |grep -v '\''Tables\.py'\'')' + 2to3 -w -n ../python3/nltk_contrib/nltk_contrib/hadoop/EM/runStreaming.py RefactoringTool: Skipping optional fixer: buffer RefactoringTool: Skipping optional fixer: idioms RefactoringTool: Skipping optional fixer: set_literal RefactoringTool: Skipping optional fixer: ws_comma RefactoringTool: Refactored ../python3/nltk_contrib/nltk_contrib/hadoop/EM/runStreaming.py RefactoringTool: Files that were modified: RefactoringTool: ../python3/nltk_contrib/nltk_contrib/hadoop/EM/runStreaming.py --- ../python3/nltk_contrib/nltk_contrib/hadoop/EM/runStreaming.py (original) +++ ../python3/nltk_contrib/nltk_contrib/hadoop/EM/runStreaming.py (refactored) @@ -14,8 +14,8 @@ # while not converged or not reach maximum iteration number while (abs(newlog - oldlog) > diff and i <= iter): - print "oldlog", oldlog - print "newlog", newlog + print("oldlog", oldlog) + print("newlog", newlog) i += 1 oldlog = newlog @@ -25,7 +25,7 @@ userdir = '/home/mxf/nltknew/nltk_contrib/hadoop/EM/' p = Popen([userdir + 'runStreaming.sh' ], shell=True, stdout=sys.stdout) p.wait() - print "returncode", p.returncode + print("returncode", p.returncode) # open the parameter output from finished iteration # and get the new loglikelihood @@ -36,5 +36,5 @@ newlog = float(li[1]) f.close() -print "oldlog", oldlog -print "newlog", newlog +print("oldlog", oldlog) +print("newlog", newlog) + for i in '$(find ../python3 -type f -name '\''*.py'\'' |grep -v '\''Tables\.py'\'')' + 2to3 -w -n ../python3/nltk_contrib/nltk_contrib/hadoop/EM/EM_reducer.py RefactoringTool: Skipping optional fixer: buffer RefactoringTool: Skipping optional fixer: idioms RefactoringTool: Skipping optional fixer: set_literal RefactoringTool: Skipping optional fixer: ws_comma RefactoringTool: No changes to ../python3/nltk_contrib/nltk_contrib/hadoop/EM/EM_reducer.py RefactoringTool: Files that need to be modified: RefactoringTool: ../python3/nltk_contrib/nltk_contrib/hadoop/EM/EM_reducer.py + for i in '$(find ../python3 -type f -name '\''*.py'\'' |grep -v '\''Tables\.py'\'')' + 2to3 -w -n ../python3/nltk_contrib/nltk_contrib/hadoop/EM/EM_mapper.py RefactoringTool: Skipping optional fixer: buffer RefactoringTool: Skipping optional fixer: idioms RefactoringTool: Skipping optional fixer: set_literal RefactoringTool: Skipping optional fixer: ws_comma RefactoringTool: Refactored ../python3/nltk_contrib/nltk_contrib/hadoop/EM/EM_mapper.py RefactoringTool: Files that were modified: RefactoringTool: ../python3/nltk_contrib/nltk_contrib/hadoop/EM/EM_mapper.py --- 
../python3/nltk_contrib/nltk_contrib/hadoop/EM/EM_mapper.py (original) +++ ../python3/nltk_contrib/nltk_contrib/hadoop/EM/EM_mapper.py (refactored) @@ -63,8 +63,8 @@ # get initial state probability p (state) Pi = DictionaryProbDist(d) - A_keys = A.keys() - B_keys = B.keys() + A_keys = list(A.keys()) + B_keys = list(B.keys()) states = set() symbols = set() for e in A_keys: + for i in '$(find ../python3 -type f -name '\''*.py'\'' |grep -v '\''Tables\.py'\'')' + 2to3 -w -n ../python3/nltk_contrib/nltk_contrib/fuf/util.py RefactoringTool: Skipping optional fixer: buffer RefactoringTool: Skipping optional fixer: idioms RefactoringTool: Skipping optional fixer: set_literal RefactoringTool: Skipping optional fixer: ws_comma RefactoringTool: Refactored ../python3/nltk_contrib/nltk_contrib/fuf/util.py RefactoringTool: Files that were modified: RefactoringTool: ../python3/nltk_contrib/nltk_contrib/fuf/util.py --- ../python3/nltk_contrib/nltk_contrib/fuf/util.py (original) +++ ../python3/nltk_contrib/nltk_contrib/fuf/util.py (refactored) @@ -37,7 +37,7 @@ """ def draw_helper(output, fstruct, pcount, ccount): output += 'fs%d [label=" " style="filled" fillcolor="white"];\n' % (pcount) - for fs, val in fstruct.items(): + for fs, val in list(fstruct.items()): if isinstance(val, nltk.FeatStruct): output += 'fs%d -> fs%d [label="%s"];\n' % (pcount, ccount, fs) output, ccount = draw_helper(output, val, ccount, + for i in '$(find ../python3 -type f -name '\''*.py'\'' |grep -v '\''Tables\.py'\'')' + 2to3 -w -n ../python3/nltk_contrib/nltk_contrib/fuf/statemachine.py RefactoringTool: Skipping optional fixer: buffer RefactoringTool: Skipping optional fixer: idioms RefactoringTool: Skipping optional fixer: set_literal RefactoringTool: Skipping optional fixer: ws_comma RefactoringTool: Refactored ../python3/nltk_contrib/nltk_contrib/fuf/statemachine.py RefactoringTool: Files that were modified: RefactoringTool: ../python3/nltk_contrib/nltk_contrib/fuf/statemachine.py --- ../python3/nltk_contrib/nltk_contrib/fuf/statemachine.py (original) +++ ../python3/nltk_contrib/nltk_contrib/fuf/statemachine.py (refactored) @@ -47,7 +47,7 @@ new_node(tokens) break elif new_node not in self.nodes: - raise RuntimeErrror, "Invalid target %s", new_state + raise RuntimeErrror("Invalid target %s").with_traceback(new_state) else: node = new_node
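The statemachine.py hunk above is one place where 2to3's mechanical translation preserves a pre-existing bug instead of fixing it. In Python 2, "raise E, msg, tb" treats the third expression as a traceback object, so the fix_raise fixer faithfully emits raise E(msg).with_traceback(tb); here the third argument is new_state rather than a traceback, and the exception name RuntimeErrror is misspelled in the source, so the line would fail at runtime under both Python 2 and 3. A sketch of what a manual repair would presumably look like, assuming the author meant RuntimeError and wanted new_state interpolated into the message (an assumption about intent, not project code):

    def check_target(new_node, nodes, new_state):
        # Py2 source:   raise RuntimeErrror, "Invalid target %s", new_state
        # 2to3 output:  raise RuntimeErrror("Invalid target %s").with_traceback(new_state)
        # Assumed intent: raise RuntimeError with new_state in the message.
        if new_node not in nodes:
            raise RuntimeError("Invalid target %s" % new_state)

Since the conversion here runs with -w -n (write in place, no backup), hunks like this one still need a manual review pass after 2to3 finishes.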
+ for i in '$(find ../python3 -type f -name '\''*.py'\'' |grep -v '\''Tables\.py'\'')' + 2to3 -w -n ../python3/nltk_contrib/nltk_contrib/fuf/specialfs.py RefactoringTool: Skipping optional fixer: buffer RefactoringTool: Skipping optional fixer: idioms RefactoringTool: Skipping optional fixer: set_literal RefactoringTool: Skipping optional fixer: ws_comma RefactoringTool: Refactored ../python3/nltk_contrib/nltk_contrib/fuf/specialfs.py RefactoringTool: Files that were modified: RefactoringTool: ../python3/nltk_contrib/nltk_contrib/fuf/specialfs.py --- ../python3/nltk_contrib/nltk_contrib/fuf/specialfs.py (original) +++ ../python3/nltk_contrib/nltk_contrib/fuf/specialfs.py (refactored) @@ -2,7 +2,7 @@ Handling for special feature names during parsing """ -from sexp import * +from .sexp import * def parse_alt(sexpl): """ @@ -17,7 +17,7 @@ feat, name, index, val = ('', '', '', '') # named alt - if isinstance(sexpl[1], basestring): + if isinstance(sexpl[1], str): # alt with index if len(sexpl) == 4: feat, name, index, val = sexpl + for i in '$(find ../python3 -type f -name '\''*.py'\'' |grep -v '\''Tables\.py'\'')' + 2to3 -w -n ../python3/nltk_contrib/nltk_contrib/fuf/sexp.py RefactoringTool: Skipping optional fixer: buffer RefactoringTool: Skipping optional fixer: idioms RefactoringTool: Skipping optional fixer: set_literal RefactoringTool: Skipping optional fixer: ws_comma RefactoringTool: Refactored ../python3/nltk_contrib/nltk_contrib/fuf/sexp.py RefactoringTool: Files that were modified: RefactoringTool: ../python3/nltk_contrib/nltk_contrib/fuf/sexp.py --- ../python3/nltk_contrib/nltk_contrib/fuf/sexp.py (original) +++ ../python3/nltk_contrib/nltk_contrib/fuf/sexp.py (refactored) @@ -6,7 +6,7 @@ import os -from statemachine import PushDownMachine +from .statemachine import PushDownMachine class SexpList(list): """ @@ -39,7 +39,7 @@ for i, val in enumerate(self): if isinstance(val, SexpList): s += val.pp() - elif isinstance(val, basestring): + elif isinstance(val, str): s += val else: s += repr(val) @@ -71,8 +71,8 @@ # set up the parenthesis self.parens = {'(':')', '[':']', '{':'}'} - self.lparens = self.parens.keys() - self.rparens = self.parens.values() + self.lparens = list(self.parens.keys()) + self.rparens = list(self.parens.values()) self._build_machine() self.machine.stack = [[]] @@ -90,8 +90,8 @@ """ Return a tokenizer """ - lparen_res = ''.join([re.escape(lparen) for lparen in self.parens.keys()]) - rparen_res = ''.join([re.escape(rparen) for rparen in self.parens.values()]) + lparen_res = ''.join([re.escape(lparen) for lparen in list(self.parens.keys())]) + rparen_res = ''.join([re.escape(rparen) for rparen in list(self.parens.values())]) tok_re = re.compile('[%s]|[%s]|[^%s%s\s]+' % (lparen_res, rparen_res, lparen_res, rparen_res)) @@ -239,18 +239,18 @@ lines = open('tests/sexp.txt').readlines() for test in lines: try: - print '%s' % test + print('%s' % test) l = SexpListParser().parse(test) - print '==>', SexpListParser().parse(test) - print - except Exception, e: - print 'Exception:', e + print('==>', SexpListParser().parse(test)) + print() + except Exception as e: + print('Exception:', e) # testing the SexpFileParser sfp = SexpFileParser('tests/typed_gr4.fuf') - print sfp.parse() - - - - - + print(sfp.parse()) + + + + + + for i in '$(find ../python3 -type f -name '\''*.py'\'' |grep -v '\''Tables\.py'\'')' + 2to3 -w -n ../python3/nltk_contrib/nltk_contrib/fuf/morphology.py RefactoringTool: Skipping optional fixer: buffer RefactoringTool: Skipping optional fixer: idioms RefactoringTool: Skipping optional fixer: set_literal RefactoringTool: Skipping optional fixer: ws_comma RefactoringTool: Refactored ../python3/nltk_contrib/nltk_contrib/fuf/morphology.py RefactoringTool: Files that were modified: RefactoringTool: ../python3/nltk_contrib/nltk_contrib/fuf/morphology.py --- ../python3/nltk_contrib/nltk_contrib/fuf/morphology.py (original) +++ ../python3/nltk_contrib/nltk_contrib/fuf/morphology.py (refactored) @@ -4,7 +4,7 @@ - morph_numeric: integer number to text """ -import lexicon +from . 
import lexicon def _is_vowel(char): return char in ['o', 'e', 'i', 'a', 'y'] @@ -24,7 +24,7 @@ """ assert word - assert isinstance(word, basestring) + assert isinstance(word, str) assert len(word) > 0 second_last = word[-2] @@ -90,7 +90,7 @@ last = word[-1] assert word - assert isinstance(word, basestring) + assert isinstance(word, str) if last == 'e': return word + 'd' @@ -132,7 +132,7 @@ Forms the suffix for the present tense of the verb WORD """ assert word - assert isinstance(word, basestring) + assert isinstance(word, str) if _is_first_person(person) or _is_second_person(person): return word elif _is_third_person(person): @@ -253,7 +253,7 @@ """ Returns the correct pronoun given the features """ - if lex and isinstance(lex, basestring) and not (lex in ['none', 'nil']): + if lex and isinstance(lex, str) and not (lex in ['none', 'nil']): return lex if pronoun_type == 'personal': # start with the 'he' then augmen by person, then, by number, + for i in '$(find ../python3 -type f -name '\''*.py'\'' |grep -v '\''Tables\.py'\'')' + 2to3 -w -n ../python3/nltk_contrib/nltk_contrib/fuf/link.py RefactoringTool: Skipping optional fixer: buffer RefactoringTool: Skipping optional fixer: idioms RefactoringTool: Skipping optional fixer: set_literal RefactoringTool: Skipping optional fixer: ws_comma RefactoringTool: Refactored ../python3/nltk_contrib/nltk_contrib/fuf/link.py RefactoringTool: Files that were modified: RefactoringTool: ../python3/nltk_contrib/nltk_contrib/fuf/link.py --- ../python3/nltk_contrib/nltk_contrib/fuf/link.py (original) +++ ../python3/nltk_contrib/nltk_contrib/fuf/link.py (refactored) @@ -80,7 +80,7 @@ def resolve_helper(fs, ancestors): # start looking for links - for feat, val in fs.items(): + for feat, val in list(fs.items()): # add to path and recurse if isinstance(val, nltk.FeatStruct): ancestors.append(val) @@ -144,8 +144,8 @@ if __name__ == '__main__': # testing the link resolution using gr0.fuf grammar and ir0.fuf inputs import os - from fufconvert import * - from fuf import * + from .fufconvert import * + from .fuf import * gfs = fuf_to_featstruct(open('tests/gr0.fuf').read()) itext = open('tests/ir0.fuf').readlines()[2] @@ -153,4 +153,4 @@ ifs = fuf_to_featstruct(itext) result = unify_with_grammar(ifs, gfs) - print output_html([ifs, gfs, result]) + print(output_html([ifs, gfs, result])) + for i in '$(find ../python3 -type f -name '\''*.py'\'' |grep -v '\''Tables\.py'\'')' + 2to3 -w -n ../python3/nltk_contrib/nltk_contrib/fuf/linearizer.py RefactoringTool: Skipping optional fixer: buffer RefactoringTool: Skipping optional fixer: idioms RefactoringTool: Skipping optional fixer: set_literal RefactoringTool: Skipping optional fixer: ws_comma RefactoringTool: Refactored ../python3/nltk_contrib/nltk_contrib/fuf/linearizer.py RefactoringTool: Files that were modified: RefactoringTool: ../python3/nltk_contrib/nltk_contrib/fuf/linearizer.py --- ../python3/nltk_contrib/nltk_contrib/fuf/linearizer.py (original) +++ ../python3/nltk_contrib/nltk_contrib/fuf/linearizer.py (refactored) @@ -4,8 +4,8 @@ """ import nltk -from link import * -from util import output_html +from .link import * +from .util import output_html def linearize(fstruct): """ @@ -25,9 +25,9 @@ else: if isinstance(fs[item], ReentranceLink): LinkResolver().resolve(fs) - if fs[item].has_key('pattern'): + if 'pattern' in fs[item]: lin_helper(fs[item], fs[item]['pattern'], output) - elif fs[item].has_key('lex'): + elif 'lex' in fs[item]: output.append(fs[item]['lex']) assert isinstance(fstruct, nltk.FeatStruct) @@ 
-37,15 +37,15 @@ return output if __name__ == '__main__': - from fufconvert import * - from fuf import * + from .fufconvert import * + from .fuf import * gfs = fuf_to_featstruct(open('tests/gr0.fuf').read()) itext = open('tests/ir0.fuf').readlines()[3] ifs = fuf_to_featstruct(itext) result = unify_with_grammar(ifs, gfs) - print result - print linearize(result) + print(result) + print(linearize(result)) + for i in '$(find ../python3 -type f -name '\''*.py'\'' |grep -v '\''Tables\.py'\'')' + 2to3 -w -n ../python3/nltk_contrib/nltk_contrib/fuf/lexicon.py RefactoringTool: Skipping optional fixer: buffer RefactoringTool: Skipping optional fixer: idioms RefactoringTool: Skipping optional fixer: set_literal RefactoringTool: Skipping optional fixer: ws_comma RefactoringTool: No changes to ../python3/nltk_contrib/nltk_contrib/fuf/lexicon.py RefactoringTool: Files that need to be modified: RefactoringTool: ../python3/nltk_contrib/nltk_contrib/fuf/lexicon.py + for i in '$(find ../python3 -type f -name '\''*.py'\'' |grep -v '\''Tables\.py'\'')' + 2to3 -w -n ../python3/nltk_contrib/nltk_contrib/fuf/fufconvert.py RefactoringTool: Skipping optional fixer: buffer RefactoringTool: Skipping optional fixer: idioms RefactoringTool: Skipping optional fixer: set_literal RefactoringTool: Skipping optional fixer: ws_comma RefactoringTool: Refactored ../python3/nltk_contrib/nltk_contrib/fuf/fufconvert.py RefactoringTool: Files that were modified: RefactoringTool: ../python3/nltk_contrib/nltk_contrib/fuf/fufconvert.py --- ../python3/nltk_contrib/nltk_contrib/fuf/fufconvert.py (original) +++ ../python3/nltk_contrib/nltk_contrib/fuf/fufconvert.py (refactored) @@ -1,10 +1,10 @@ import re import os import nltk -from sexp import * -from link import * -from specialfs import * -from fstypes import * +from .sexp import * +from .link import * +from .specialfs import * +from .fstypes import * def fuf_to_featstruct(fuf): """ @@ -23,7 +23,7 @@ assert sexp.lparen == '(' fs = nltk.FeatStruct() for child in sexp: - if isinstance(child, basestring): + if isinstance(child, str): feat, val = _convert_fuf_feature(sexp) fs[feat] = val break @@ -55,11 +55,11 @@ del sexp[1] result = _list_convert(sexp[1]) sexp[1] = result - print sexp[1] + print(sexp[1]) feat, val = sexp else: assert len(sexp) == 2, sexp[1] - assert isinstance(sexp[0], basestring), sexp + assert isinstance(sexp[0], str), sexp feat, val = sexp # Special handling for pattern feature @@ -72,7 +72,7 @@ assert isinstance(val, SexpList) and val.lparen == '(' choices = list() for c in val: - if isinstance(c, basestring): + if isinstance(c, str): choices.append(c) else: choices.append(_convert_fuf_featstruct(c)) @@ -124,7 +124,7 @@ # process the type defs and the grammar for sexp in lsexp: - if isinstance(sexp[0], basestring) and sexp[0] == 'define-feature-type': + if isinstance(sexp[0], str) and sexp[0] == 'define-feature-type': assert len(sexp) == 3 name, children = sexp[1], sexp[2] type_table.define_type(name, children) @@ -166,7 +166,7 @@ #test the alt feature - print 'START LIST TEST' + print('START LIST TEST') #listlines = open('tests/list.fuf').readlines() #for line in listlines: #print 'INPUTS:', line @@ -198,19 +198,19 @@ # test the example grammars grammar_files = [gfile for gfile in os.listdir('tests/') if gfile.startswith('gr')] - print grammar_files + print(grammar_files) for gfile in grammar_files: - print "FILE: %s" % gfile + print("FILE: %s" % gfile) text = open('tests/%s' % gfile).read() - print text - print fuf_to_featstruct(text) - print + print(text) + 
print(fuf_to_featstruct(text)) + print() 1/0 type_table, grammar = fuf_file_to_featstruct('tests/typed_gr4.fuf') - print type_table - print grammar + print(type_table) + print(grammar) gr5 = fuf_to_featstruct(open('tests/gr5.fuf').read()) - print gr5 + print(gr5) + for i in '$(find ../python3 -type f -name '\''*.py'\'' |grep -v '\''Tables\.py'\'')' + 2to3 -w -n ../python3/nltk_contrib/nltk_contrib/fuf/fuf.py RefactoringTool: Skipping optional fixer: buffer RefactoringTool: Skipping optional fixer: idioms RefactoringTool: Skipping optional fixer: set_literal RefactoringTool: Skipping optional fixer: ws_comma RefactoringTool: Refactored ../python3/nltk_contrib/nltk_contrib/fuf/fuf.py --- ../python3/nltk_contrib/nltk_contrib/fuf/fuf.py (original) +++ ../python3/nltk_contrib/nltk_contrib/fuf/fuf.py (refactored) @@ -1,10 +1,10 @@ import os import nltk -from fufconvert import * -from link import * -from linearizer import * -from util import output_html, flatten +from .fufconvert import * +from .link import * +from .linearizer import * +from .util import output_html, flatten class GrammarPathResolver(object): @@ -41,7 +41,7 @@ alts = list() fs = nltk.FeatStruct() - for gkey, gvalue in grammar.items(): + for gkey, gvalue in list(grammar.items()): if gkey != "alt" and not gkey.startswith("alt_"): #if isinstance(gvalue, basestring): fs[gkey] = gvalue @@ -63,7 +63,7 @@ @return: list """ - altkeys = fs[altname].keys() + altkeys = list(fs[altname].keys()) altkeys = sorted([int(key) for key in altkeys if key != "_index_"], cmp) altkeys = [str(key) for key in altkeys] @@ -107,7 +107,7 @@ """ if isinstance(pack, list): for subpack in pack: - for fkey, fvalue in fs.items(): + for fkey, fvalue in list(fs.items()): if (fkey in subpack) and \ GrammarPathResolver._is_subsumed_val(table, fs, fkey, subpack): pass @@ -120,7 +120,7 @@ subpack[fkey] = fvalue else: assert isinstance(pack, nltk.FeatStruct) - for fkey, fvalue in fs.items(): + for fkey, fvalue in list(fs.items()): if (fkey in pack) and \ GrammarPathResolver._is_subsumed_val(table, fs, fkey, pack): pass @@ -138,7 +138,7 @@ path through the alternations. 
""" - if isinstance(fstruct, basestring): + if isinstance(fstruct, str): return fstruct fs, alts = GrammarPathResolver.filter_for_alt(fstruct) @@ -148,7 +148,7 @@ toplevel_pack = GrammarPathResolver.alt_to_list(fstruct, altname) subpack = list() for item in toplevel_pack: - if isinstance(item, nltk.FeatStruct) and len(item.keys()) == 0: + if isinstance(item, nltk.FeatStruct) and len(list(item.keys())) == 0: # empty feature - result of having opts pass elif isinstance(item, nltk.FeatStruct): @@ -162,7 +162,7 @@ return result else: total_packs = list() - for fkey, fvalue in fstruct.items(): + for fkey, fvalue in list(fstruct.items()): if isinstance(fvalue, nltk.FeatStruct): subpack = list() fs, alts = GrammarPathResolver.filter_for_alt(fvalue) @@ -170,7 +170,7 @@ for item in self.resolve(fvalue): newfs = nltk.FeatStruct() newfs[fkey] = item - for key, value in fvalue.items(): + for key, value in list(fvalue.items()): if not ('alt' in value): newfs[key] = value subpack.append(newfs) @@ -319,7 +319,7 @@ return True if ('pattern' in fstruct): - for fkey in subfs_val.keys(): + for fkey in list(subfs_val.keys()): if fkey in fstruct['pattern']: return True return False @@ -332,7 +332,7 @@ unifs = fs.unify(gr) if unifs: resolver.resolve(unifs) - for fname, fval in unifs.items(): + for fname, fval in list(unifs.items()): if Unifier._isconstituent(unifs, fname, fval): newval = Unifier._unify(fval, grs, resolver) if newval: @@ -366,24 +36RefactoringTool: Files that were modified: RefactoringTool: ../python3/nltk_contrib/nltk_contrib/fuf/fuf.py 6,24 @@ input_files = ['ir2.fuf'] for ifile, gfile in zip(input_files, grammar_files): if ifile == 'ir3.fuf' and gfile == 'gr3.fuf': - print 'gr3.fuf doesn\'t work because of the (* focus) s-expression in the feature structure' + print('gr3.fuf doesn\'t work because of the (* focus) s-expression in the feature structure') continue # input files contain more than one definition of input output = None result = None - print "\nINPUT FILE: %s, GRAMMAR FILE: %s" % (ifile, gfile) + print("\nINPUT FILE: %s, GRAMMAR FILE: %s" % (ifile, gfile)) gfs = fuf_to_featstruct(open('tests/%s' % gfile).read()) for i, iline in enumerate(open('tests/%s' % ifile).readlines()): try: ifs = fuf_to_featstruct(iline) - except Exception, e: - print 'Failed to convert %s to nltk.FeatStruct' % iline + except Exception as e: + print('Failed to convert %s to nltk.FeatStruct' % iline) exit() fuf = Unifier(ifs, gfs) result = fuf.unify() if result: output = " ".join(linearize(result)) - print output_html([ifs, gfs, result, output]) - print i, "result:", output + print(output_html([ifs, gfs, result, output])) + print(i, "result:", output) else: - print i, 'result: failed' + print(i, 'result: failed') + for i in '$(find ../python3 -type f -name '\''*.py'\'' |grep -v '\''Tables\.py'\'')' + 2to3 -w -n ../python3/nltk_contrib/nltk_contrib/fuf/fstypes.py RefactoringTool: Skipping optional fixer: buffer RefactoringTool: Skipping optional fixer: idioms RefactoringTool: Skipping optional fixer: set_literal RefactoringTool: Skipping optional fixer: ws_comma RefactoringTool: Refactored ../python3/nltk_contrib/nltk_contrib/fuf/fstypes.py RefactoringTool: Files that were modified: RefactoringTool: ../python3/nltk_contrib/nltk_contrib/fuf/fstypes.py --- ../python3/nltk_contrib/nltk_contrib/fuf/fstypes.py (original) +++ ../python3/nltk_contrib/nltk_contrib/fuf/fstypes.py (refactored) @@ -2,7 +2,7 @@ C{fstypes.py} module contains the implementation of feature value types as defined in the FUF manual (v5.2) """ 
-from sexp import * +from .sexp import * from nltk.featstruct import CustomFeatureValue, UnificationFailure class FeatureTypeTable(object): @@ -28,9 +28,9 @@ @type children: single string or list of strings """ - if name not in self.table.keys(): + if name not in list(self.table.keys()): self.table[name] = [] - if isinstance(children, basestring): + if isinstance(children, str): children = [children] for child in children: self.table[name].append(child) @@ -48,14 +48,14 @@ # quick check if the specialization is the immediate one spec = specialization if name == spec: return True - if not self.table.has_key(name): return False + if name not in self.table: return False if spec in self.table[name]: return True return any(self.subsume(item, spec) for item in self.table[name]) def __repr__(self): output = "" - for key, value in self.table.items(): + for key, value in list(self.table.items()): output += "%s <--- %s\n" % (key, value) return output @@ -141,16 +141,16 @@ """ def assign_types_helper(fs, type_table, flat_type_table): # go through the feature structure and convert the typed values - for fkey, fval in fs.items(): + for fkey, fval in list(fs.items()): if isinstance(fval, nltk.FeatStruct): assign_types_helper(fval, type_table, flat_type_table) - elif isinstance(fval, basestring) and (fval in flat_type_table): + elif isinstance(fval, str) and (fval in flat_type_table): newval = TypedFeatureValue(fval, table) fs[fkey] = newval # flattten the table flat_type_table = list() - for tkey, tvalue in table.table.items(): + for tkey, tvalue in list(table.table.items()): flat_type_table.append(tkey) for tval in tvalue: flat_type_table.append(tval) @@ -165,9 +165,9 @@ sexp = SexpListParser().parse(typedef) type_table.define_type(sexp[1], sexp[2]) - print type_table - print type_table.subsume('np', 'common') - print type_table.subsume('mood', 'imperative') + print(type_table) + print(type_table.subsume('np', 'common')) + print(type_table.subsume('mood', 'imperative'))
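The fstypes.py diff just above shows two of 2to3's dict fixers side by side. fix_has_key rewrites table.has_key(name) into the idiomatic "name in table", while fix_dict conservatively wraps every .keys()/.values()/.items() call in list(), since it cannot prove the dict view is consumed only once; in a membership test such as the define_type() hunk the extra copy is harmless but redundant. A small sketch with toy data (not from this build):

    table = {'np': ['common', 'proper']}   # stand-in for FeatureTypeTable.table

    # What fix_dict emits -- safe everywhere, but the list() copy is unneeded here:
    if 'np' not in list(table.keys()):
        table['np'] = []

    # Idiomatic Python 3: membership on a dict already tests its keys.
    if 'np' not in table:
        table['np'] = []

    # fix_has_key, by contrast, goes straight to the idiomatic form:
    # table.has_key('np')  ->  'np' in table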
""" -from fufconvert import * -from fuf import * -from linearizer import * -from fstypes import * -from link import * -from util import * +from .fufconvert import * +from .fuf import * +from .linearizer import * +from .fstypes import * +from .link import * +from .util import * __all__ = [ # Unifier + for i in '$(find ../python3 -type f -name '\''*.py'\'' |grep -v '\''Tables\.py'\'')' + 2to3 -w -n ../python3/nltk_contrib/nltk_contrib/fst/fst2.py RefactoringTool: Skipping optional fixer: buffer RefactoringTool: Skipping optional fixer: idioms RefactoringTool: Skipping optional fixer: set_literal RefactoringTool: Skipping optional fixer: ws_comma RefactoringTool: Refactored ../python3/nltk_contrib/nltk_contrib/fst/fst2.py RefactoringTool: Files that were modified: RefactoringTool: ../python3/nltk_contrib/nltk_contrib/fst/fst2.py --- ../python3/nltk_contrib/nltk_contrib/fst/fst2.py (original) +++ ../python3/nltk_contrib/nltk_contrib/fst/fst2.py (refactored) @@ -965,8 +965,8 @@ uniqueArcs[(src,dst)] += [(in_str,out_str)] else: uniqueArcs[(src,dst)] = [(in_str,out_str)] - ratio = float(len(uniqueArcs.keys())) / float(stateCount) - for src,dst in uniqueArcs.keys(): + ratio = float(len(list(uniqueArcs.keys()))) / float(stateCount) + for src,dst in list(uniqueArcs.keys()): uniqueArcs[(src,dst)].sort() sortedArcs = FST.mergeRuns(uniqueArcs[(src,dst)],minRun) label = "" @@ -1467,7 +1467,7 @@ if self.stepper is None: return # Perform one step. - try: result, val = self.stepper.next() + try: result, val = next(self.stepper) except StopIteration: return if result == 'fail': @@ -1481,7 +1481,7 @@ self.out_text.insert('end', ' (Finished!)') elif result == 'backtrack': self.out_text.insert('end', ' (Backtrack)') - for state, widget in self.graph.state_widgets.items(): + for state, widget in list(self.graph.state_widgets.items()): if state == val: self.graph.mark_state(state, '#f0b0b0') else: self.graph.unmark_state(state) else: @@ -1512,7 +1512,7 @@ self.state_descr.insert('end', state_descr or '') # Highlight the new dst state. - for state, widget in self.graph.state_widgets.items(): + for state, widget in list(self.graph.state_widgets.items()): if state == fst.dst(arc): self.graph.mark_state(state, '#00ff00') elif state == fst.src(arc): @@ -1520,7 +1520,7 @@ else: self.graph.unmark_state(state) # Highlight the new arc. 
- for a, widget in self.graph.arc_widgets.items(): + for a, widget in list(self.graph.arc_widgets.items()): if a == arc: self.graph.mark_arc(a) else: self.graph.unmark_arc(a) @@ -1571,11 +1571,11 @@ end -> """) - print "john eats the bread ->" - print ' '+ ' '.join(fst.transduce("john eats the bread".split())) + print("john eats the bread ->") + print(' '+ ' '.join(fst.transduce("john eats the bread".split()))) rev = fst.inverted() - print "la vache mange de l'herbe ->" - print ' '+' '.join(rev.transduce("la vache mange de l'herbe".split())) + print("la vache mange de l'herbe ->") + print(' '+' '.join(rev.transduce("la vache mange de l'herbe".split()))) demo = FSTDemo(fst) demo.transduce("the cow eats the bread".split()) + for i in '$(find ../python3 -type f -name '\''*.py'\'' |grep -v '\''Tables\.py'\'')' + 2to3 -w -n ../python3/nltk_contrib/nltk_contrib/fst/fst.py RefactoringTool: Skipping optional fixer: buffer RefactoringTool: Skipping optional fixer: idioms RefactoringTool: Skipping optional fixer: set_literal RefactoringTool: Skipping optional fixer: ws_comma RefactoringTool: Refactored ../python3/nltk_contrib/nltk_contrib/fst/fst.py RefactoringTool: Files that were modified: RefactoringTool: ../python3/nltk_contrib/nltk_contrib/fst/fst.py --- ../python3/nltk_contrib/nltk_contrib/fst/fst.py (original) +++ ../python3/nltk_contrib/nltk_contrib/fst/fst.py (refactored) @@ -1363,7 +1363,7 @@ if self.stepper is None: return # Perform one step. - try: result, val = self.stepper.next() + try: result, val = next(self.stepper) except StopIteration: return if result == 'fail': @@ -1377,7 +1377,7 @@ self.out_text.insert('end', ' (Finished!)') elif result == 'backtrack': self.out_text.insert('end', ' (Backtrack)') - for state, widget in self.graph.state_widgets.items(): + for state, widget in list(self.graph.state_widgets.items()): if state == val: self.graph.mark_state(state, '#f0b0b0') else: self.graph.unmark_state(state) else: @@ -1408,7 +1408,7 @@ self.state_descr.insert('end', state_descr or '') # Highlight the new dst state. - for state, widget in self.graph.state_widgets.items(): + for state, widget in list(self.graph.state_widgets.items()): if state == fst.dst(arc): self.graph.mark_state(state, '#00ff00') elif state == fst.src(arc): @@ -1416,7 +1416,7 @@ else: self.graph.unmark_state(state) # Highlight the new arc. 
- for a, widget in self.graph.arc_widgets.items(): + for a, widget in list(self.graph.arc_widgets.items()): if a == arc: self.graph.mark_arc(a) else: self.graph.unmark_arc(a) @@ -1467,11 +1467,11 @@ end -> """) - print "john eats the bread ->" - print ' '+ ' '.join(fst.transduce("john eats the bread".split())) + print("john eats the bread ->") + print(' '+ ' '.join(fst.transduce("john eats the bread".split()))) rev = fst.inverted() - print "la vache mange de l'herbe ->" - print ' '+' '.join(rev.transduce("la vache mange de l'herbe".split())) + print("la vache mange de l'herbe ->") + print(' '+' '.join(rev.transduce("la vache mange de l'herbe".split()))) demo = FSTDemo(fst) demo.transduce("the cow eats the bread".split()) + for i in '$(find ../python3 -type f -name '\''*.py'\'' |grep -v '\''Tables\.py'\'')' + 2to3 -w -n ../python3/nltk_contrib/nltk_contrib/fst/draw_graph.py RefactoringTool: Skipping optional fixer: buffer RefactoringTool: Skipping optional fixer: idioms RefactoringTool: Skipping optional fixer: set_literal RefactoringTool: Skipping optional fixer: ws_comma RefactoringTool: Refactored ../python3/nltk_contrib/nltk_contrib/fst/draw_graph.py RefactoringTool: Files that were modified: RefactoringTool: ../python3/nltk_contrib/nltk_contrib/fst/draw_graph.py --- ../python3/nltk_contrib/nltk_contrib/fst/draw_graph.py (original) +++ ../python3/nltk_contrib/nltk_contrib/fst/draw_graph.py (refactored) @@ -133,7 +133,9 @@ labely = (y1+y2)*0.5 - (x2-x1)*(self._curve/2 + 8/r) return (int(labelx), int(labely)) - def _line_coords(self, (startx, starty), (endx, endy)): + def _line_coords(self, xxx_todo_changeme, xxx_todo_changeme1): + (startx, starty) = xxx_todo_changeme + (endx, endy) = xxx_todo_changeme1 (x1, y1) = int(startx), int(starty) (x2, y2) = int(endx), int(endy) radius1 = 0 @@ -253,7 +255,7 @@ Remove an edge from the graph (but don't destroy it). @type edge: L{GraphEdgeWidget} """ - print 'remove', edge + print('remove', edge) # Get the edge's start & end nodes. start, end = self._startnode[edge], self._endnode[edge] @@ -315,9 +317,9 @@ """ Remove a node from the graph, and destroy the node. """ - print 'removing', node + print('removing', node) for widget in self.remove_node(node): - print 'destroying', widget + print('destroying', widget) widget.destroy() def _tags(self): return [] @@ -467,7 +469,7 @@ while len(nodes) > 0: best = (None, None, -1) # node, position, score. for pos in range(len(scores)): - for (node, score) in scores[pos].items(): + for (node, score) in list(scores[pos].items()): if (score > best[2] and level[pos] is None and node in nodes): best = (node, pos, score) @@ -526,9 +528,9 @@ """ How many *unexpanded* nodes can be reached from the given node? 
""" - if self._nodelevel.has_key(node): return 0 + if node in self._nodelevel: return 0 if reached is None: reached = {} - if not reached.has_key(node): + if node not in reached: reached[node] = 1 for edge in self._outedges.get(node, []): self._reachable(self._endnode[edge], reached) @@ -551,14 +553,14 @@ if levelnum >= len(self._levels): self._levels.append([]) for parent_node in parent_level: # Add the parent node - if not self._nodelevel.has_key(parent_node): + if parent_node not in self._nodelevel: self._levels[levelnum-1].append(parent_node) self._nodelevel[parent_node] = levelnum-1 # Recurse to its children child_nodes = [self._endnode[edge] for edge in self._outedges.get(parent_node, []) - if not self._nodelevel.has_key(self._endnode[edge])] + if self._endnode[edge] not in self._nodelevel] if len(child_nodes) > 0: self._add_descendants_dfs(child_nodes, levelnum+1) @@ -569,7 +571,7 @@ child_nodes = [self._endnode[edge] for edge in self._outedges.get(parent_node, [])] for node in child_nodes: - if not self._nodelevel.has_key(node): + if node not in self._nodelevel: self._levels[levelnum].append(node) self._nodelevel[node] = levelnum frontier_nodes.append(node) @@ -585,7 +587,7 @@ child_nodes += [self._startnode[edge] for edge in self._inedges.get(parent_node, [])] for node in child_nodes: - if not self._nodelevel.has_key(node): + if node not in self._nodelevel: self._levels[levelnum].append(node) self._nodelevel[node] = levelnum frontier_nodes.append(node) + for i in '$(find ../python3 -type f -name '\''*.py'\'' |grep -v '\''Tables\.py'\'')' + 2to3 -w -n ../python3/nltk_contrib/nltk_contrib/fst/__init__.py RefactoringTool: Skipping optional fixer: buffer RefactoringTool: Skipping optional fixer: idioms RefactoringTool: Skipping optional fixer: set_literal RefactoringTool: Skipping optional fixer: ws_comma RefactoringTool: No changes to ../python3/nltk_contrib/nltk_contrib/fst/__init__.py RefactoringTool: Files that need to be modified: RefactoringTool: ../python3/nltk_contrib/nltk_contrib/fst/__init__.py + for i in '$(find ../python3 -type f -name '\''*.py'\'' |grep -v '\''Tables\.py'\'')' + 2to3 -w -n ../python3/nltk_contrib/nltk_contrib/featuredemo.py RefactoringTool: Skipping optional fixer: buffer RefactoringTool: Skipping optional fixer: idioms RefactoringTool: Skipping optional fixer: set_literal RefactoringTool: Skipping optional fixer: ws_comma RefactoringTool: Refactored ../python3/nltk_contrib/nltk_contrib/featuredemo.py RefactoringTool: Files that were modified: RefactoringTool: ../python3/nltk_contrib/nltk_contrib/featuredemo.py --- ../python3/nltk_contrib/nltk_contrib/featuredemo.py (original) +++ ../python3/nltk_contrib/nltk_contrib/featuredemo.py (refactored) @@ -13,7 +13,7 @@ def text_parse(grammar, sent, trace=2, drawtrees=False, latex=False): parser = grammar.earley_parser(trace=trace) - print parser._grammar + print(parser._grammar) tokens = sent.split() trees = parser.get_parse_list(tokens) if drawtrees: @@ -21,8 +21,8 @@ TreeView(trees) else: for tree in trees: - if latex: print tree.latex_qtree() - else: print tree + if latex: print(tree.latex_qtree()) + else: print(tree) def main(): import sys @@ -83,7 +83,7 @@ sentence = line.strip() if sentence == '': continue if sentence[0] == '#': continue - print "Sentence: %s" % sentence + print("Sentence: %s" % sentence) text_parse(grammar, sentence, trace, False, options.latex) if __name__ == '__main__': + for i in '$(find ../python3 -type f -name '\''*.py'\'' |grep -v '\''Tables\.py'\'')' + 2to3 -w -n 
../python3/nltk_contrib/nltk_contrib/dependency/util.py RefactoringTool: Skipping optional fixer: buffer RefactoringTool: Skipping optional fixer: idioms RefactoringTool: Skipping optional fixer: set_literal RefactoringTool: Skipping optional fixer: ws_comma RefactoringTool: Refactored ../python3/nltk_contrib/nltk_contrib/dependency/util.py RefactoringTool: Files that were modified: RefactoringTool: ../python3/nltk_contrib/nltk_contrib/dependency/util.py --- ../python3/nltk_contrib/nltk_contrib/dependency/util.py (original) +++ ../python3/nltk_contrib/nltk_contrib/dependency/util.py (refactored) @@ -5,7 +5,7 @@ from nltk import tokenize from itertools import islice import os -from deptree import DepGraph +from .deptree import DepGraph from nltk.stem.wordnet import WordNetLemmatizer def tag2tab(s, sep='/'): @@ -60,8 +60,8 @@ assert depgraph_input, 'depgraph_input is empty' if verbose: - print 'Begin DepGraph creation' - print 'depgraph_input=\n%s' % depgraph_input + print('Begin DepGraph creation') + print('depgraph_input=\n%s' % depgraph_input) return DepGraph().read(depgraph_input) @@ -79,7 +79,7 @@ #s = '' for sent in islice(tabtagged(), 3): for line in sent: - print line, + print(line, end=' ') #s += ''.join(sent) #print >>f, s #f.close() + for i in '$(find ../python3 -type f -name '\''*.py'\'' |grep -v '\''Tables\.py'\'')' + 2to3 -w -n ../python3/nltk_contrib/nltk_contrib/dependency/ptbconv.py RefactoringTool: Skipping optional fixer: buffer RefactoringTool: Skipping optional fixer: idioms RefactoringTool: Skipping optional fixer: set_literal RefactoringTool: Skipping optional fixer: ws_comma RefactoringTool: Refactored ../python3/nltk_contrib/nltk_contrib/dependency/ptbconv.py RefactoringTool: Files that were modified: RefactoringTool: ../python3/nltk_contrib/nltk_contrib/dependency/ptbconv.py --- ../python3/nltk_contrib/nltk_contrib/dependency/ptbconv.py (original) +++ ../python3/nltk_contrib/nltk_contrib/dependency/ptbconv.py (refactored) @@ -18,6 +18,7 @@ import math from nltk.internals import find_binary import os +from functools import reduce OUTPUT_FORMAT = '%s\t%s\t_\t%s\t_\t_\t%s\t%s\t_\t_\n' @@ -84,7 +85,7 @@ (stdout, stderr) = p.communicate() if verbose: - print stderr.strip() + print(stderr.strip()) return stdout @@ -94,10 +95,10 @@ [os.environ['NLTK_DATA'], 'corpora', 'treebank']) def convert_all(): - for i in xrange(199): - print '%s:' % (i+1), + for i in range(199): + print('%s:' % (i+1), end=' ') convert(i+1, 'D', True, True) if __name__ == '__main__': - print convert(1, 'D') + print(convert(1, 'D')) + for i in '$(find ../python3 -type f -name '\''*.py'\'' |grep -v '\''Tables\.py'\'')' + 2to3 -w -n ../python3/nltk_contrib/nltk_contrib/dependency/deptree.py RefactoringTool: Skipping optional fixer: buffer RefactoringTool: Skipping optional fixer: idioms RefactoringTool: Skipping optional fixer: set_literal RefactoringTool: Skipping optional fixer: ws_comma RefactoringTool: Refactored ../python3/nltk_contrib/nltk_contrib/dependency/deptree.py RefactoringTool: Files that were modified: RefactoringTool: ../python3/nltk_contrib/nltk_contrib/dependency/deptree.py --- ../python3/nltk_contrib/nltk_contrib/dependency/deptree.py (original) +++ ../python3/nltk_contrib/nltk_contrib/dependency/deptree.py (refactored) @@ -35,7 +35,7 @@ def __str__(self): # return '\n'.join([str(n) for n in self.nodelist]) - return '\n'.join([', '.join(['%s: %15s'%item for item in n.iteritems()]) for n in self.nodelist]) + return '\n'.join([', '.join(['%s: %15s'%item for item in n.items()]) for n in 
self.nodelist]) def load(self, file): """ @@ -151,7 +151,7 @@ labeled directed graph. @rtype: C{XDigraph} """ - nx_nodelist = range(1, len(self.nodelist)) + nx_nodelist = list(range(1, len(self.nodelist))) nx_edgelist = [(n, self._hd(n), self._rel(n)) for n in nx_nodelist if self._hd(n)] self.nx_labels = {} @@ -191,7 +191,7 @@ . . 9 VMOD """) tree = dg.deptree() - print tree.pprint() + print(tree.pprint()) if nx: #currently doesn't work try: + for i in '$(find ../python3 -type f -name '\''*.py'\'' |grep -v '\''Tables\.py'\'')' + 2to3 -w -n ../python3/nltk_contrib/nltk_contrib/dependency/__init__.py RefactoringTool: Skipping optional fixer: buffer RefactoringTool: Skipping optional fixer: idioms RefactoringTool: Skipping optional fixer: set_literal RefactoringTool: Skipping optional fixer: ws_comma RefactoringTool: Refactored ../python3/nltk_contrib/nltk_contrib/dependency/__init__.py RefactoringTool: Files that were modified: RefactoringTool: ../python3/nltk_contrib/nltk_contrib/dependency/__init__.py --- ../python3/nltk_contrib/nltk_contrib/dependency/__init__.py (original) +++ ../python3/nltk_contrib/nltk_contrib/dependency/__init__.py (refactored) @@ -4,4 +4,4 @@ # URL: # For license information, see LICENSE.TXT -from deptree import * +from .deptree import * + for i in '$(find ../python3 -type f -name '\''*.py'\'' |grep -v '\''Tables\.py'\'')' + 2to3 -w -n ../python3/nltk_contrib/nltk_contrib/coref/util.py RefactoringTool: Skipping optional fixer: buffer RefactoringTool: Skipping optional fixer: idioms RefactoringTool: Skipping optional fixer: set_literal RefactoringTool: Skipping optional fixer: ws_comma RefactoringTool: Refactored ../python3/nltk_contrib/nltk_contrib/coref/util.py RefactoringTool: Files that were modified: RefactoringTool: ../python3/nltk_contrib/nltk_contrib/coref/util.py --- ../python3/nltk_contrib/nltk_contrib/coref/util.py (original) +++ ../python3/nltk_contrib/nltk_contrib/coref/util.py (refactored) @@ -8,14 +8,14 @@ import time try: - import cPickle as pickle + import pickle as pickle except: import pickle try: - from cStringIO import StringIO + from io import StringIO except: - from StringIO import StringIO + from io import StringIO from nltk.data import load, find from nltk.corpus import CorpusReader, BracketParseCorpusReader @@ -114,7 +114,7 @@ def zipzip(*lists): - return LazyMap(lambda lst: zip(*lst), LazyZip(*lists)) + return LazyMap(lambda lst: list(zip(*lst)), LazyZip(*lists)) def load_treebank(sections): treebank_path = os.environ.get('NLTK_TREEBANK', 'treebank/combined') @@ -133,16 +133,16 @@ 'state_union', PlaintextCorpusReader, r'(?!\.svn).*\.txt') state_union = TreebankTaggerCorpusReader(state_union) - print 'Treebank tagger demo...' - print 'Tagged sentences:' + print('Treebank tagger demo...') + print('Tagged sentences:') for sent in state_union.tagged_sents()[500:505]: - print sent - print - print - print 'Tagged words:' + print(sent) + print() + print() + print('Tagged words:') for word in state_union.tagged_words()[500:505]: - print word - print + print(word) + print() def treebank_chunk_tagger_demo(): from nltk.corpus.util import LazyCorpusLoader @@ -153,17 +153,17 @@ 'state_union', PlaintextCorpusReader, r'(?!\.svn).*\.txt') state_union = TreebankChunkTaggerCorpusReader(state_union) - print 'Treebank chunker demo...' 
- print 'Chunked sentences:' + print('Treebank chunker demo...') + print('Chunked sentences:') for sent in state_union.chunked_sents()[500:505]: - print sent - print - print - print 'Parsed sentences:' + print(sent) + print() + print() + print('Parsed sentences:') for tree in state_union.parsed_sents()[500:505]: - print tree - print - print + print(tree) + print() + print() def muc6_chunk_tagger_demo(): from nltk.corpus.util import LazyCorpusLoader @@ -172,12 +172,12 @@ treebank = MUC6NamedEntityChunkTaggerCorpusReader(load_treebank('0[12]')) - print 'MUC6 named entity chunker demo...' - print 'Chunked sentences:' + print('MUC6 named entity chunker demo...') + print('Chunked sentences:') for sent in treebank.chunked_sents()[:10]: - print sent - print - print + print(sent) + print() + print() def baseline_chunk_tagger_demo(): from nltk.corpus.util import LazyCorpusLoader @@ -186,16 +186,16 @@ chunker = BaselineNamedEntityChunkTagger() treebank = load_treebank('0[12]') - print 'Baseline named entity chunker demo...' - print 'Chunked sentences:' + print('Baseline named entity chunker demo...') + print('Chunked sentences:') for sent in treebank.sents()[:10]: - print chunker.chunk(sent) - print - print 'IOB-tagged sentences:' + print(chunker.chunk(sent)) + print() + print('IOB-tagged sentences:') for sent in treebank.sents()[:10]: - print chunker.tag(sent) - print - print + print(chunker.tag(sent)) + print() + print() def demo(): from nltk_contrib.coref.util import treebank_tagger_demo, \ + for i in '$(find ../python3 -type f -name '\''*.py'\'' |grep -v '\''Tables\.py'\'')' + 2to3 -w -n ../python3/nltk_contrib/nltk_contrib/coref/train.py RefactoringTool: Skipping optional fixer: buffer RefactoringTool: Skipping optional fixer: idioms RefactoringTool: Skipping optional fixer: set_literal RefactoringTool: Skipping optional fixer: ws_comma RefactoringTool: Refactored ../python3/nltk_contrib/nltk_contrib/coref/train.py --- ../python3/nltk_contrib/nltk_contrib/coref/train.py (original) +++ ../python3/nltk_contrib/nltk_contrib/coref/train.py (refactored) @@ -19,22 +19,22 @@ from nltk_contrib.coref.data import BufferedGzipFile try: - import cPickle as pickle + import pickle as pickle except: import pickle try: - from cStringIO import StringIO + from io import StringIO except: - from StringIO import StringIO + from io import StringIO class LidstoneProbDistFactory(LidstoneProbDist): def __init__(self, fd, *args, **kwargs): LidstoneProbDist.__init__(self, fd, 0.01, args[-1]) samples = fd.samples() - self._probs = dict(zip([0]*len(samples), samples)) - self._logprobs = dict(zip([0]*len(samples), samples)) + self._probs = dict(list(zip([0]*len(samples), samples))) + self._logprobs = dict(list(zip([0]*len(samples), samples))) for sample in samples: self._logprobs[sample] = LidstoneProbDist.logprob(self, sample) self._probs[sample] = LidstoneProbDist.prob(self, sample) @@ -84,7 +84,7 @@ untagged_sequence = LazyMap(__untag, LazyMap(__featurize, test_sequence)) predicted_tags = LazyMap(self.classify, untagged_sequence) acc = accuracy(correct_tags, predicted_tags) - print 'accuracy over %d tokens: %.2f' % (count, acc) + print('accuracy over %d tokens: %.2f' % (count, acc)) class MaxentClassifierFactory(object): @@ -125,37 +125,37 @@ verbose or include printed output. @type verbose: C{bool} """ - print 'Training ', train_class - print 'Loading training data (supervised)...' 
+ print('Training ', train_class) + print('Loading training data (supervised)...') labeled_sequence = labeled_sequence[:num_train_sents] sent_count = len(labeled_sequence) word_count = sum([len(sent) for sent in labeled_sequence]) - print '%s sentences' % (sent_count) - print '%s words' % (word_count) + print('%s sentences' % (sent_count)) + print('%s words' % (word_count)) - print 'Training...' + print('Training...') start = time.time() model = train_class.train(labeled_sequence, **kwargs) end = time.time() - print 'Training time: %.3fs' % (end - start) - print 'Training time per sentence: %.3fs' % (float(end - start) / sent_count) - print 'Training time per word: %.3fs' % (float(end - start) / word_count) + print('Training time: %.3fs' % (end - start)) + print('Training time per sentence: %.3fs' % (float(end - start) / sent_count)) + print('Training time per word: %.3fs' % (float(end - start) / word_count)) - print 'Loading test data...' + print('Loading test data...') test_sequence = test_sequence[:num_test_sents] sent_count = len(test_sequence) word_count = sum([len(sent) for sent in test_sequence]) - print '%s sentences' % (sent_count) - print '%s words' % (word_count) + print('%s sentences' % (sent_count)) + print('%s words' % (word_count)) try: - print 'Saving model...' + print('Saving model...') if isinstance(pickle_file, str): if pickle_file.endswith('.gz'): _open = BufferedGzipFile @@ -165,23 +165,23 @@ pickle.dump(model, stream) stream.close() model = pickle.load(_open(pickle_file, 'rb')) - print 'Model saved as %s' % pickle_file + print('Model saved as %s' % pickle_file) else: stream = StringIO() pickle.dump(model, stream) stream = StringIO(stream.getvalue()) model = pickle.load(stream) - except Exception, e: - print 'Error saving model, %s' % str(e) + except Exception as e: + print('Error saving model, %s' % str(e)) - print 'Testing...' 
+ print('Testing...') start = time.time() model.test(test_sequence, **kwargs) end = time.time() - print 'Test time: %.3fs' % (end - start) - print 'Test time per sentence: %.3fs' % (float(end - start) / sent_count) - print 'Test time per word: %.3fs' % (float(end - start) / word_count) + print('Test time: %.3fs' % (end - start)) + print('Test time per sentence: %.3fs' % (float(end - start) / sent_count)) + print('Test time per word: %.3fs' % (float(end - start) / word_count)) return model RefactoringTool: Files that were modified: RefactoringTool: ../python3/nltk_contrib/nltk_contrib/coref/train.py + for i in '$(find ../python3 -type f -name '\''*.py'\'' |grep -v '\''Tables\.py'\'')' + 2to3 -w -n ../python3/nltk_contrib/nltk_contrib/coref/tag.py RefactoringTool: Skipping optional fixer: buffer RefactoringTool: Skipping optional fixer: idioms RefactoringTool: Skipping optional fixer: set_literal RefactoringTool: Skipping optional fixer: ws_comma RefactoringTool: Refactored ../python3/nltk_contrib/nltk_contrib/coref/tag.py RefactoringTool: Files that were modified: RefactoringTool: ../python3/nltk_contrib/nltk_contrib/coref/tag.py --- ../python3/nltk_contrib/nltk_contrib/coref/tag.py (original) +++ ../python3/nltk_contrib/nltk_contrib/coref/tag.py (refactored) @@ -3,9 +3,9 @@ import subprocess try: - from cStringIO import StringIO + from io import StringIO except: - from StringIO import StringIO + from io import StringIO from nltk.util import LazyMap, LazyConcatenation from nltk.internals import find_binary, java @@ -48,7 +48,7 @@ def tagged_sents(self): sents = self.sents() - batch_indices = range(len(sents) / 1024 + 1) + batch_indices = list(range(len(sents) / 1024 + 1)) return LazyConcatenation(LazyMap(lambda i: self._tagger.batch_tag(sents[i * 1024: i * 1024 + 1024]), batch_indices)) @@ -67,7 +67,7 @@ def config_mxpost(mxpost_home=None): global _mxpost_classpath, _mxpost_home classpath = os.environ.get('CLASSPATH', '').split(':') - mxpost_jar = filter(lambda c: c.endswith('mxpost.jar'), classpath) + mxpost_jar = [c for c in classpath if c.endswith('mxpost.jar')] if mxpost_jar: _mxpost_home = os.path.dirname(mxpost_jar[0]) _mxpost_classpath = mxpost_jar[0] @@ -83,7 +83,7 @@ else: _mxpost_home = None _mxpost_classpath = None - raise Exception, "can't find mxpost.jar" + raise Exception("can't find mxpost.jar") def call_mxpost(classpath=None, stdin=None, stdout=None, stderr=None, blocking=False): @@ -103,14 +103,14 @@ def mxpost_parse_output(mxpost_output): result = [] mxpost_output = mxpost_output.strip() - for sent in filter(None, mxpost_output.split('\n')): - tokens = filter(None, re.split(r'\s+', sent)) + for sent in [_f for _f in mxpost_output.split('\n') if _f]: + tokens = [_f for _f in re.split(r'\s+', sent) if _f] if tokens: result.append([]) for token in tokens: m = _MXPOST_OUTPUT_RE.match(token) if not m: - raise Exception, "invalid mxpost tag pattern: %s, %s" % (token, tokens) + raise Exception("invalid mxpost tag pattern: %s, %s" % (token, tokens)) word = m.group('word') tag = m.group('tag') result[-1].append((word, tag)) @@ -122,7 +122,7 @@ p.communicate('\n'.join([' '.join(sent) for sent in sents])) rc = p.returncode if rc != 0: - raise Exception, 'exited with non-zero status %s' % rc + raise Exception('exited with non-zero status %s' % rc) if kwargs.get('verbose'): - print 'warning: %s' % stderr + print('warning: %s' % stderr) return mxpost_parse_output(stdout) + for i in '$(find ../python3 -type f -name '\''*.py'\'' |grep -v '\''Tables\.py'\'')' + 2to3 -w -n
../python3/nltk_contrib/nltk_contrib/coref/resolve.py RefactoringTool: Skipping optional fixer: buffer RefactoringTool: Skipping optional fixer: idioms RefactoringTool: Skipping optional fixer: set_literal RefactoringTool: Skipping optional fixer: ws_comma RefactoringTool: Refactored ../python3/nltk_contrib/nltk_contrib/coref/resolve.py RefactoringTool: Files that were modified: RefactoringTool: ../python3/nltk_contrib/nltk_contrib/coref/resolve.py --- ../python3/nltk_contrib/nltk_contrib/coref/resolve.py (original) +++ ../python3/nltk_contrib/nltk_contrib/coref/resolve.py (refactored) @@ -10,14 +10,14 @@ import optparse try: - import cPickle as pickle + import pickle as pickle except: import pickle try: - from cStringIO import StringIO + from io import StringIO except: - from StringIO import StringIO + from io import StringIO from nltk.util import LazyMap, LazyZip, LazyConcatenation, LazyEnumerate @@ -129,23 +129,23 @@ resolved_mentions = resolver.resolve_mentions(mentions) resolved_discourse = resolver.resolve(sents) - print 'Baseline coref resolver demo...' - print 'Mentions:' + print('Baseline coref resolver demo...') + print('Mentions:') for mention in mentions: - print mention - print - print 'Resolved mentions:' + print(mention) + print() + print('Resolved mentions:') for mention in resolved_mentions: - print mention - print - print 'Resolved discourse:' + print(mention) + print() + print('Resolved discourse:') for sent in resolved_discourse: - print sent - print - print + print(sent) + print() + print() def demo(): - print 'Demo...' + print('Demo...') baseline_coref_resolver_demo() # muc6_test = LazyCorpusLoader( # 'muc6', MUC6CorpusReader, @@ -184,7 +184,7 @@ # print if __name__ == '__main__': - print time.ctime(time.time()) + print(time.ctime(time.time())) parser = optparse.OptionParser() parser.add_option('-d', '--demo', action='store_true', dest='demo', @@ -322,9 +322,9 @@ pred_tags = model.tag(words) for x, y, z in zip(pred_tags, gold_tags, words): if x == y: - print ' ', (x, y, z) + print(' ', (x, y, z)) else: - print '* ', (x, y, z) + print('* ', (x, y, z)) elif options.train_ner == 'classifier2': muc6_train = LazyCorpusLoader( @@ -352,11 +352,11 @@ pred_tags = model.tag(words) for x, y, z in zip(pred_tags, gold_tags, words): if x == y: - print ' ', (x, y, z) + print(' ', (x, y, z)) else: - print '* ', (x, y, z) + print('* ', (x, y, z)) elif options.demo: demo() - print time.ctime(time.time()) + print(time.ctime(time.time())) + for i in '$(find ../python3 -type f -name '\''*.py'\'' |grep -v '\''Tables\.py'\'')' + 2to3 -w -n ../python3/nltk_contrib/nltk_contrib/coref/ne.py RefactoringTool: Skipping optional fixer: buffer RefactoringTool: Skipping optional fixer: idioms RefactoringTool: Skipping optional fixer: set_literal RefactoringTool: Skipping optional fixer: ws_comma RefactoringTool: Refactored ../python3/nltk_contrib/nltk_contrib/coref/ne.py --- ../python3/nltk_contrib/nltk_contrib/coref/ne.py (original) +++ ../python3/nltk_contrib/nltk_contrib/coref/ne.py (refactored) @@ -159,13 +159,13 @@ if window > 0 and index > 0: prev_feats = \ self.__class__(tokens, index - 1, history, window=window - 1) - for key, val in prev_feats.items(): + for key, val in list(prev_feats.items()): if not key.startswith('next_') and not key == 'word': self['prev_%s' % key] = val if window > 0 and index < len(tokens) - 1: next_feats = self.__class__(tokens, index + 1, window=window - 1) - for key, val in next_feats.items(): + for key, val in list(next_feats.items()): if not 
key.startswith('prev_') and not key == 'word': self['next_%s' % key] = val @@ -184,11 +184,11 @@ import doctest failed, passed = doctest.testfile('test/ne.doctest', verbose) if not verbose: - print '%d passed and %d failed.' % (failed, passed) + print('%d passed and %d failed.' % (failed, passed)) if failed == 0: - print 'Test passed.' + print('Test passed.') else: - print '***Test Failed*** %d failures.' % failed + print('***Test Failed*** %d failures.' % failed) return failed, passed _NE_CHUNK_TYPES = ('PERSON', 'LOCATION', 'ORGANIZATION', 'MONEY') @@ -199,7 +199,7 @@ import optparse try: - import cPickle as pickle + import pickle as pickle except: import pickle @@ -244,7 +244,7 @@ num_test = int(m.group('test') or 0) options.numsents = (num_train, num_test) else: - raise ValueError, "malformed argument for option -n" + raise ValueError("malformed argument for option -n") else: options.numsents = (None, None) @@ -256,10 +256,10 @@ file_test = m.group('test') options.extract = (file_train, file_test) else: - raise ValueError, "malformed argument for option -e" - - except ValueError, v: - print 'error: %s' % v.message + raise ValueError("malformed argument for option -e") + + except ValueError as v: + print('error: %s' % v.message) parser.print_help() if options.unittest: @@ -292,9 +292,9 @@ for index in range(len(tokens)): tag = tokens[index][-1] feats = feature_detector(tokens, index, history) - keys.update(feats.keys()) + keys.update(list(feats.keys())) stream.write('%s %s\n' % (tag, ' '.join(['%s=%s' % (k, re.escape(str(v))) - for k, v in feats.items()]))) + for k, v in list(feats.items())]))) history.append(tag) history = [] stream.close() @@ -306,9 +306,9 @@ for index in range(len(tokens)): tag = tokens[index][-1] feats = feature_detector(tokens, index, history) - keys.update(feats.keys()) + keys.update(list(feats.keys())) stream.write('%s %s\n' % (tag, ' '.join(['%s=%s' % (k, re.escape(str(v))) - for k, v in feats.items()]))) + for k, v in list(feats.items())]))) history.append(tag) history = [] stream.close() @@ -343,9 +343,9 @@ reader = MXPostTaggerCorpusReader(eval(options.corpus)) iob_sents = reader.iob_sents() tagged_sents = reader.tagged_sents() - corpus = LazyMap(lambda (iob_sent, tagged_sent): + corpus = LazyMap(lambda iob_sent_tagged_sent: [(iw, tt, iob) for ((iw, iob), (tw, tt)) - in zip(iob_sent, tagged_sent)], + in zip(iob_sent_tagged_sent[0], iob_sent_tagged_sent[1])], LazyZip(iob_sents, tagged_sents)) else: iob_sents = eval(options.corpus).iob_sents() @@ -360,8 +360,8 @@ trainer = eval(options.trainer) if options.verbose: - print 'Training %s with %d sentences' % \ - (options.trainer, num_train) + print('Training %s with %d sentences' % \ + (options.trainer, num_train)) ner = trainer(train, feature_detector=NERChunkTaggerFeatureDetector, chunk_types=_NE_CHUNK_TYPES, @@ -382,12 +382,12 @@ stream.close() ner = pickle.load(_open(options.model, 'r')) if options.verbose: - print 'Model saved as %s' % options.model - except Exception, e: - print "error: %s" % e + print('Model saved as %s' % options.model) + except Exception as e: + print("error: %s" % e) if test: if options.verbose: - print 'Testing %s on %d sentences' % \ - (options.trainer, num_test) + print('Testing %s on %d sentences' % \ + (options.trainer, num_test)) ner.test(test, verbose=options.verbose) RefactoringTool: Files that were modified: RefactoringTool: ../python3/nltk_contrib/nltk_contrib/coref/ne.py + for i in '$(find ../python3 -type f -name '\''*.py'\'' |grep -v '\''Tables\.py'\'')' + 2to3 -w -n
../python3/nltk_contrib/nltk_contrib/coref/muc7.py RefactoringTool: Skipping optional fixer: buffer RefactoringTool: Skipping optional fixer: idioms RefactoringTool: Skipping optional fixer: set_literal RefactoringTool: Skipping optional fixer: ws_comma RefactoringTool: Refactored ../python3/nltk_contrib/nltk_contrib/coref/muc7.py RefactoringTool: Files that were modified: RefactoringTool: ../python3/nltk_contrib/nltk_contrib/coref/muc7.py RefactoringTool: Warnings/messages while refactoring: RefactoringTool: ### In file ../python3/nltk_contrib/nltk_contrib/coref/muc7.py ### RefactoringTool: Line 301: could not convert: raise 'Demo requires MUC-7 Corpus, set MUC7_DIR environment variable!' RefactoringTool: Python 3 does not support string exceptions --- ../python3/nltk_contrib/nltk_contrib/coref/muc7.py (original) +++ ../python3/nltk_contrib/nltk_contrib/coref/muc7.py (refactored) @@ -273,21 +273,21 @@ try: reader = MUC7CorpusReader(root, file) - print 'Paragraphs for %s:' % (file) + print('Paragraphs for %s:' % (file)) for para in reader.paras(): - print ' %s' % (para) - print - print 'Sentences for %s:' % (file) + print(' %s' % (para)) + print() + print('Sentences for %s:' % (file)) for sent in reader.sents(): - print ' %s' % (sent) - print - print 'Words for %s:' % (file) + print(' %s' % (sent)) + print() + print('Words for %s:' % (file)) for word in reader.words(): - print ' %s' % (word) - print - except Exception, e: - print 'Error encountered while running demo for %s: %s' % (file, e) - print + print(' %s' % (word)) + print() + except Exception as e: + print('Error encountered while running demo for %s: %s' % (file, e)) + print() def demo(): """ + for i in '$(find ../python3 -type f -name '\''*.py'\'' |grep -v '\''Tables\.py'\'')' + 2to3 -w -n ../python3/nltk_contrib/nltk_contrib/coref/muc.py RefactoringTool: Skipping optional fixer: buffer RefactoringTool: Skipping optional fixer: idioms RefactoringTool: Skipping optional fixer: set_literal RefactoringTool: Skipping optional fixer: ws_comma RefactoringTool: Refactored ../python3/nltk_contrib/nltk_contrib/coref/muc.py RefactoringTool: Files that were modified: RefactoringTool: ../python3/nltk_contrib/nltk_contrib/coref/muc.py --- ../python3/nltk_contrib/nltk_contrib/coref/muc.py (original) +++ ../python3/nltk_contrib/nltk_contrib/coref/muc.py (refactored) @@ -99,10 +99,10 @@ # def __init__(self, text, docno=None, dateline=None, headline=''): def __init__(self, **text): self.text = None - if isinstance(text, basestring): + if isinstance(text, str): self.text = text elif isinstance(text, dict): - for key, val in text.items(): + for key, val in list(text.items()): setattr(self, key, val) else: raise @@ -154,7 +154,7 @@ """ if fileids is None: fileids = self._fileids - elif isinstance(fileids, basestring): + elif isinstance(fileids, str): fileids = [fileids] return concat([self.open(f).read() for f in fileids]) @@ -221,7 +221,7 @@ chunks.append([(word, None) for word in token[0]]) # If the token's contents is a string, append it as a # word/tag tuple. - elif isinstance(token[0], basestring): + elif isinstance(token[0], str): chunks.append((token[0], None)) # Something bad happened. else: @@ -416,7 +416,7 @@ def _read_parsed_block(self, stream): # TODO: LazyMap but StreamBackedCorpusView doesn't support # AbstractLazySequence currently. - return map(self._parse, self._read_block(stream)) + return list(map(self._parse, self._read_block(stream))) def _parse(self, doc): """ @@ -488,7 +488,7 @@ # Get the leaves. 
s = (tree.leaves(),) # Get the label - if isinstance(tree.node, basestring): + if isinstance(tree.node, str): node = (tree.node,) elif isinstance(tree.node, tuple): node = tree.node @@ -497,7 +497,7 @@ # Merge the leaves and the label. return s + node # If the tree is a string just convert it to a tuple. - elif isinstance(tree, basestring): + elif isinstance(tree, str): return (tree, None) # Something bad happened. else: @@ -513,7 +513,7 @@ sents[index] += sents[index + next] sents[index + next] = '' next += 1 - sents = filter(None, sents) + sents = [_f for _f in sents if _f] return sents if s: tree = Tree(top_node, []) @@ -554,7 +554,7 @@ else: stack[-1].extend(_WORD_TOKENIZER.tokenize(word)) if len(stack) != 1: - print stack + print(stack) assert len(stack) == 1 return stack[0] @@ -567,25 +567,25 @@ muc6 = LazyCorpusLoader('muc6/', MUCCorpusReader, muc6_documents) for sent in muc6.iob_sents()[:]: for word in sent: - print word - print - print + print(word) + print() + print() for sent in muc6.mentions(depth=None): for mention in sent: - print mention - if sent: print - print + print(mention) + if sent: print() + print() muc7 = LazyCorpusLoader('muc7/', MUCCorpusReader, muc7_documents) for sent in muc7.iob_sents()[:]: for word in sent: - print word - print - print + print(word) + print() + print() for sent in muc7.mentions(depth=None): for mention in sent: - print mention - if sent: print - print + print(mention) + if sent: print() + print() if __name__ == '__main__': demo() + for i in '$(find ../python3 -type f -name '\''*.py'\'' |grep -v '\''Tables\.py'\'')' + 2to3 -w -n ../python3/nltk_contrib/nltk_contrib/coref/freiburg.py RefactoringTool: Skipping optional fixer: buffer RefactoringTool: Skipping optional fixer: idioms RefactoringTool: Skipping optional fixer: set_literal RefactoringTool: Skipping optional fixer: ws_comma RefactoringTool: Refactored ../python3/nltk_contrib/nltk_contrib/coref/freiburg.py RefactoringTool: Files that were modified: RefactoringTool: ../python3/nltk_contrib/nltk_contrib/coref/freiburg.py RefactoringTool: Warnings/messages while refactoring: RefactoringTool: ### In file ../python3/nltk_contrib/nltk_contrib/coref/freiburg.py ### RefactoringTool: Line 266: could not convert: raise 'Demo requires Freiburg Corpus, set FREIBURG_DIR environment variable!' 
RefactoringTool: Python 3 does not support string exceptions --- ../python3/nltk_contrib/nltk_contrib/coref/freiburg.py (original) +++ ../python3/nltk_contrib/nltk_contrib/coref/freiburg.py (refactored) @@ -238,21 +238,21 @@ try: reader = FreiburgCorpusReader(root, file) - print 'Paragraphs for %s:' % (file) + print('Paragraphs for %s:' % (file)) for para in reader.paras(): - print ' %s' % (para) - print - print 'Sentences for %s:' % (file) + print(' %s' % (para)) + print() + print('Sentences for %s:' % (file)) for sent in reader.sents(): - print ' %s' % (sent) - print - print 'Words for %s:' % (file) + print(' %s' % (sent)) + print() + print('Words for %s:' % (file)) for word in reader.words(): - print ' %s/%s' % (word, word.pos()) - print - except Exception, e: - print 'Error encountered while running demo for %s: %s' % (file, e) - print + print(' %s/%s' % (word, word.pos())) + print() + except Exception as e: + print('Error encountered while running demo for %s: %s' % (file, e)) + print() def demo(): """ + for i in '$(find ../python3 -type f -name '\''*.py'\'' |grep -v '\''Tables\.py'\'')' + 2to3 -w -n ../python3/nltk_contrib/nltk_contrib/coref/features.py RefactoringTool: Skipping optional fixer: buffer RefactoringTool: Skipping optional fixer: idioms RefactoringTool: Skipping optional fixer: set_literal RefactoringTool: Skipping optional fixer: ws_comma RefactoringTool: Refactored ../python3/nltk_contrib/nltk_contrib/coref/features.py RefactoringTool: Files that were modified: RefactoringTool: ../python3/nltk_contrib/nltk_contrib/coref/features.py --- ../python3/nltk_contrib/nltk_contrib/coref/features.py (original) +++ ../python3/nltk_contrib/nltk_contrib/coref/features.py (refactored) @@ -432,7 +432,7 @@ wt = word_type(word) if len(wt) == 0: wt = None if '*' in word: continue - print "%-20s\t%s" % (word, wt) + print("%-20s\t%s" % (word, wt)) if __name__ == '__main__': demo() + for i in '$(find ../python3 -type f -name '\''*.py'\'' |grep -v '\''Tables\.py'\'')' + 2to3 -w -n ../python3/nltk_contrib/nltk_contrib/coref/data.py RefactoringTool: Skipping optional fixer: buffer RefactoringTool: Skipping optional fixer: idioms RefactoringTool: Skipping optional fixer: set_literal RefactoringTool: Skipping optional fixer: ws_comma RefactoringTool: Refactored ../python3/nltk_contrib/nltk_contrib/coref/data.py RefactoringTool: Files that were modified: RefactoringTool: ../python3/nltk_contrib/nltk_contrib/coref/data.py --- ../python3/nltk_contrib/nltk_contrib/coref/data.py (original) +++ ../python3/nltk_contrib/nltk_contrib/coref/data.py (refactored) @@ -9,14 +9,14 @@ from gzip import GzipFile, READ as GZ_READ, WRITE as GZ_WRITE try: - import cPickle as pickle + import pickle as pickle except: import pickle try: - from cStringIO import StringIO + from io import StringIO except: - from StringIO import StringIO + from io import StringIO class BufferedGzipFile(GzipFile): """ + for i in '$(find ../python3 -type f -name '\''*.py'\'' |grep -v '\''Tables\.py'\'')' + 2to3 -w -n ../python3/nltk_contrib/nltk_contrib/coref/chunk.py RefactoringTool: Skipping optional fixer: buffer RefactoringTool: Skipping optional fixer: idioms RefactoringTool: Skipping optional fixer: set_literal RefactoringTool: Skipping optional fixer: ws_comma RefactoringTool: Refactored ../python3/nltk_contrib/nltk_contrib/coref/chunk.py --- ../python3/nltk_contrib/nltk_contrib/coref/chunk.py (original) +++ ../python3/nltk_contrib/nltk_contrib/coref/chunk.py (refactored) @@ -71,13 +71,13 @@ if window > 0 and index > 0: prev_feats = 
\ self.__class__(tokens, index - 1, history, window=window - 1) - for key, val in prev_feats.items(): + for key, val in list(prev_feats.items()): if not key.startswith('next_') and key != 'word': self['prev_%s' % key] = val if window > 0 and index < len(tokens) - 1: next_feats = self.__class__(tokens, index + 1, window=window - 1) - for key, val in next_feats.items(): + for key, val in list(next_feats.items()): if not key.startswith('prev_') and key != 'word': self['next_%s' % key] = val @@ -99,16 +99,16 @@ return self.__iob2tree(self.tag(sent)) def batch_parse(self, sents): - return map(self.__iob2tree, self.batch_tag(sents)) + return list(map(self.__iob2tree, self.batch_tag(sents))) def chunk(self, sent): return self.__tree2chunks(self.parse(sent)) def batch_chunk(self, sents): - return map(self.__tree2chunks, self.batch_parse(sents)) + return list(map(self.__tree2chunks, self.batch_parse(sents))) def __iob2tree(self, tagged_sent): - return tokens2tree(map(flatten, tagged_sent), self.chunk_types) + return tokens2tree(list(map(flatten, tagged_sent)), self.chunk_types) def __tree2chunks(self, tree): chunks = [] @@ -132,7 +132,7 @@ def train(cls, iob_sents, **kwargs): fd = kwargs.get('feature_detector', ChunkTaggerFeatureDetector) chunk_types = kwargs.get('chunk_types', _DEFAULT_CHUNK_TYPES) - train = LazyMap(lambda sent: map(unflatten, sent), iob_sents) + train = LazyMap(lambda sent: list(map(unflatten, sent)), iob_sents) chunker = cls(fd, train, NaiveBayesClassifier.train) chunker.chunk_types = chunk_types return chunker @@ -157,7 +157,7 @@ count_cutoff=count_cutoff, min_lldelta=min_lldelta, trace=trace) - train = LazyMap(lambda sent: map(unflatten, sent), iob_sents) + train = LazyMap(lambda sent: list(map(unflatten, sent)), iob_sents) chunker = cls(fd, train, __maxent_train) chunker.chunk_types = chunk_types return chunker @@ -182,7 +182,7 @@ else: trace = 0 - train = LazyMap(lambda sent: map(unflatten, sent), iob_sents) + train = LazyMap(lambda sent: list(map(unflatten, sent)), iob_sents) mallet_home = os.environ.get('MALLET_HOME', '/usr/local/mallet-0.4') nltk.classify.mallet.config_mallet(mallet_home) @@ -205,7 +205,7 @@ for token in tokens: token, tag = unflatten(token) - if isinstance(token, basestring): + if isinstance(token, str): word = token pos = None elif isinstance(token, tuple): @@ -254,32 +254,32 @@ def test_chunk_tagger(chunk_tagger, iob_sents, **kwargs): chunk_types = chunk_tagger.chunk_types - correct = map(lambda sent: tokens2tree(sent, chunk_types), iob_sents) - guesses = chunk_tagger.batch_parse(map(lambda c: c.leaves(), correct)) + correct = [tokens2tree(sent, chunk_types) for sent in iob_sents] + guesses = chunk_tagger.batch_parse([c.leaves() for c in correct]) chunkscore = ChunkScore() for c, g in zip(correct, guesses): chunkscore.score(c, g) if kwargs.get('verbose'): - guesses = chunk_tagger.batch_tag(map(lambda c: c.leaves(), correct)) + guesses = chunk_tagger.batch_tag([c.leaves() for c in correct]) correct = iob_sents - print + print() for c, g in zip(correct, guesses): - for tokc, tokg in zip(map(flatten, c), map(flatten, g)): RefactoringTool: Files that were modified: RefactoringTool: ../python3/nltk_contrib/nltk_contrib/coref/chunk.py + for tokc, tokg in zip(list(map(flatten, c)), list(map(flatten, g))): word = tokc[0] iobc = tokc[-1] iobg = tokg[-1] star = '' if iobg != iobc: star = '*' - print '%3s %20s %20s %20s' % (star, word, iobc, iobg) - print - - print 'Precision: %.2f' % chunkscore.precision() - print 'Recall: %.2f' % chunkscore.recall() - print
'Accuracy: %.2f' % chunkscore.accuracy() - print 'F-measure: %.2f' % chunkscore.f_measure() + print('%3s %20s %20s %20s' % (star, word, iobc, iobg)) + print() + + print('Precision: %.2f' % chunkscore.precision()) + print('Recall: %.2f' % chunkscore.recall()) + print('Accuracy: %.2f' % chunkscore.accuracy()) + print('F-measure: %.2f' % chunkscore.f_measure()) return chunkscore @@ -287,11 +287,11 @@ import doctest failed, tested = doctest.testfile('test/chunk.doctest', verbose) if not verbose: - print '%d passed and %d failed.' % (tested - failed, failed) + print('%d passed and %d failed.' % (tested - failed, failed)) if failed == 0: - print 'Test passed.' + print('Test passed.') else: - print '***Test Failed*** %d failures.' % failed + print('***Test Failed*** %d failures.' % failed) return (tested - failed), failed def demo(): @@ -304,7 +304,7 @@ import optparse try: - import cPickle as pickle + import pickle as pickle except: import pickle @@ -342,12 +342,12 @@ num_test = int(m.group('test') or 0) options.numsents = (num_train, num_test) else: - raise ValueError, "malformed argument for option -n" + raise ValueError("malformed argument for option -n") else: options.numsents = (None, None) - except ValueError, v: - print 'error: %s' % v.message + except ValueError as v: + print('error: %s' % v.message) parser.print_help() if options.unittest: @@ -369,8 +369,8 @@ trainer = eval(options.trainer) if options.verbose: - print 'Training %s with %d sentences' % \ - (options.trainer, num_train) + print('Training %s with %d sentences' % \ + (options.trainer, num_train)) chunker = trainer(train, verbose=options.verbose) if options.model: @@ -388,12 +388,12 @@ stream.close() chunker = pickle.load(_open(options.model, 'r')) if options.verbose: - print 'Model saved as %s' % options.model - except Exception, e: - print "error: %s" % e + print('Model saved as %s' % options.model) + except Exception as e: + print("error: %s" % e) if test: if options.verbose: - print 'Testing %s on %d sentences' % \ - (options.trainer, num_test) + print('Testing %s on %d sentences' % \ + (options.trainer, num_test)) chunker.test(test, verbose=options.verbose) + for i in '$(find ../python3 -type f -name '\''*.py'\'' |grep -v '\''Tables\.py'\'')' + 2to3 -w -n ../python3/nltk_contrib/nltk_contrib/coref/api.py RefactoringTool: Skipping optional fixer: buffer RefactoringTool: Skipping optional fixer: idioms RefactoringTool: Skipping optional fixer: set_literal RefactoringTool: Skipping optional fixer: ws_comma RefactoringTool: Refactored ../python3/nltk_contrib/nltk_contrib/coref/api.py RefactoringTool: Files that were modified: RefactoringTool: ../python3/nltk_contrib/nltk_contrib/coref/api.py --- ../python3/nltk_contrib/nltk_contrib/coref/api.py (original) +++ ../python3/nltk_contrib/nltk_contrib/coref/api.py (refactored) @@ -27,7 +27,7 @@ """ def __init__(self): if self.__class__ == TrainableI: - raise AssertionError, "Interfaces can't be instantiated" + raise AssertionError("Interfaces can't be instantiated") def train(self, labeled_sequence, test_sequence=None, unlabeled_sequence=None, **kwargs): @@ -54,7 +54,7 @@ # Inherit the superclass documentation. 
def __init__(self): if self.__class__ == HiddenMarkovModelChunkTaggerTransformI: - raise AssertionError, "Interfaces can't be instantiated" + raise AssertionError("Interfaces can't be instantiated") def path2tags(self, path): """ @@ -78,7 +78,7 @@ """ def __init__(self): if self.__class__ == CorpusReaderDecorator: - raise AssertionError, "Interfaces can't be instantiated" + raise AssertionError("Interfaces can't be instantiated") def reader(self): """ @@ -115,7 +115,7 @@ def __init__(self, s, **kwargs): if self.__class__ == NamedEntityI: - raise AssertionError, "Interfaces can't be instantiated" + raise AssertionError("Interfaces can't be instantiated") self._iob_tag = kwargs.get('iob_tag', self.BEGINS) def iob_in(self): @@ -159,7 +159,7 @@ """ def __init__(self): if self.__class__ == ChunkTaggerI: - raise AssertionError, "Interfaces can't be instantiated" + raise AssertionError("Interfaces can't be instantiated") @@ -172,7 +172,7 @@ """ def __init__(self): if self.__class__ == CorefResolverI: - raise AssertionError, "Interfaces can't be instantiated" + raise AssertionError("Interfaces can't be instantiated") def mentions(self, sentences): """ @@ -255,7 +255,7 @@ class ChunkTaggerI(TaggerI, ChunkParserI): def __init__(self): if self.__class__ == ChunkTaggerI: - raise AssertionError, "Interfaces can't be instantiated" + raise AssertionError("Interfaces can't be instantiated") def tag(self, sent): """ @@ -310,7 +310,7 @@ @type classifier_builder: C{function} """ if self.__class__ == AbstractClassifierBasedTagger: - raise AssertionError, "Interfaces can't be instantiated" + raise AssertionError("Interfaces can't be instantiated") ClassifierBasedTagger.__init__(self, feature_detector, labeled_sequence, classifier_builder) + for i in '$(find ../python3 -type f -name '\''*.py'\'' |grep -v '\''Tables\.py'\'')' + 2to3 -w -n ../python3/nltk_contrib/nltk_contrib/coref/ace2.py RefactoringTool: Skipping optional fixer: buffer RefactoringTool: Skipping optional fixer: idioms RefactoringTool: Skipping optional fixer: set_literal RefactoringTool: Skipping optional fixer: ws_comma RefactoringTool: Refactored ../python3/nltk_contrib/nltk_contrib/coref/ace2.py RefactoringTool: Files that were modified: RefactoringTool: ../python3/nltk_contrib/nltk_contrib/coref/ace2.py RefactoringTool: Warnings/messages while refactoring: RefactoringTool: ### In file ../python3/nltk_contrib/nltk_contrib/coref/ace2.py ### RefactoringTool: Line 267: could not convert: raise 'Demo requires ACE-2 Corpus, set ACE2_DIR environment variable!' 
RefactoringTool: Python 3 does not support string exceptions --- ../python3/nltk_contrib/nltk_contrib/coref/ace2.py (original) +++ ../python3/nltk_contrib/nltk_contrib/coref/ace2.py (refactored) @@ -243,17 +243,17 @@ try: reader = ACE2CorpusReader(root, file) - print 'Sentences for %s:' % (file) + print('Sentences for %s:' % (file)) for sent in reader.sents(): - print ' %s' % (sent) - print - print 'Words for %s:' % (file) + print(' %s' % (sent)) + print() + print('Words for %s:' % (file)) for word in reader.words(): - print ' %s' % (word) - print - except Exception, e: - print 'Error encountered while running demo for %s: %s' % (file, e) - print + print(' %s' % (word)) + print() + except Exception as e: + print('Error encountered while running demo for %s: %s' % (file, e)) + print() def demo(): """ + for i in '$(find ../python3 -type f -name '\''*.py'\'' |grep -v '\''Tables\.py'\'')' + 2to3 -w -n ../python3/nltk_contrib/nltk_contrib/coref/__init__.py RefactoringTool: Skipping optional fixer: buffer RefactoringTool: Skipping optional fixer: idioms RefactoringTool: Skipping optional fixer: set_literal RefactoringTool: Skipping optional fixer: ws_comma RefactoringTool: Refactored ../python3/nltk_contrib/nltk_contrib/coref/__init__.py RefactoringTool: Files that were modified: RefactoringTool: ../python3/nltk_contrib/nltk_contrib/coref/__init__.py --- ../python3/nltk_contrib/nltk_contrib/coref/__init__.py (original) +++ ../python3/nltk_contrib/nltk_contrib/coref/__init__.py (refactored) @@ -31,7 +31,7 @@ # Import top-level functionality into top-level namespace # Processing packages -- these all define __all__ carefully. -from api import * +from .api import * import nltk.data from nltk.corpus.util import LazyCorpusLoader @@ -39,6 +39,6 @@ if os.environ.get('NLTK_DATA_MUC6') \ and os.environ.get('NLTK_DATA_MUC6') not in nltk.data.path: nltk.data.path.insert(0, os.environ.get('NLTK_DATA_MUC6')) -from muc import MUCCorpusReader +from .muc import MUCCorpusReader muc6 = LazyCorpusLoader('muc6/', MUCCorpusReader, r'.*\.ne\..*\.sgm') + for i in '$(find ../python3 -type f -name '\''*.py'\'' |grep -v '\''Tables\.py'\'')' + 2to3 -w -n ../python3/nltk_contrib/nltk_contrib/concord.py RefactoringTool: Skipping optional fixer: buffer RefactoringTool: Skipping optional fixer: idioms RefactoringTool: Skipping optional fixer: set_literal RefactoringTool: Skipping optional fixer: ws_comma RefactoringTool: Refactored ../python3/nltk_contrib/nltk_contrib/concord.py --- ../python3/nltk_contrib/nltk_contrib/concord.py (original) +++ ../python3/nltk_contrib/nltk_contrib/concord.py (refactored) @@ -225,16 +225,16 @@ reg = re.compile(middleRegexp) if verbose: - print "Matching the following target words:" + print("Matching the following target words:") wordLocs = [] # get list of (sentence, word) pairs to get context for - for item in self.index.getIndex().iteritems(): + for item in self.index.getIndex().items(): if reg.match("/".join([item[0][0].lower(), item[0][1]])): if verbose: - print "/".join(item[0]) + print("/".join(item[0])) wordLocs.append(item[1]) - print "" + print("") items = [] # if context lengths are specified in words: @@ -358,24 +358,24 @@ items.append((left, target, right, sentenceNum)) if verbose: - print "Found %d matches for target word..." % len(items) + print("Found %d matches for target word..." % len(items)) # sort the concordance if sort == self.SORT_WORD: if verbose: - print "Sorting by target word..." 
+ print("Sorting by target word...") items.sort(key=lambda i:i[1][0].lower()) elif sort == self.SORT_POS: if verbose: - print "Sorting by target word POS tag..." + print("Sorting by target word POS tag...") items.sort(key=lambda i:i[1][1].lower()) elif sort == self.SORT_NUM: if verbose: - print "Sorting by sentence number..." + print("Sorting by sentence number...") items.sort(key=lambda i:i[3]) elif sort == self.SORT_RIGHT_CONTEXT: if verbose: - print "Sorting by first word of right context..." + print("Sorting by first word of right context...") items.sort(key=lambda i:i[2][0][0]) # if any regular expressions have been given for the context, filter @@ -390,11 +390,11 @@ rightRe=None if leftRegexp != None: if verbose: - print "Filtering on left context..." + print("Filtering on left context...") leftRe = re.compile(leftRegexp) if rightRegexp != None: if verbose: - print "Filtering on right context..." + print("Filtering on right context...") rightRe = re.compile(rightRegexp) for item in items: @@ -515,11 +515,11 @@ rPad = int(floor(max(maxMiddleLength - len(middle), 0) / 2.0)) middle = " "*lPad + middle + " "*rPad - print left + "| " + middle + " | " + right + " " + print(left + "| " + middle + " | " + right + " ") count += 1 if verbose: - print "\n" + repr(count) + " lines" + print("\n" + repr(count) + " lines") def _matches(self, item, leftRe, rightRe): """ Private method that runs the given regexps over a raw concordance @@ -798,10 +798,10 @@ x = 0 other = 0 total = 0 - print name - print "-"*(maxKeyLength + 7) + print(name) + print("-"*(maxKeyLength + 7)) # for each key: - for key in dist.keys(): + for key in list(dist.keys()): # keep track of how many samples shown, if using the showFirstX # option #if showFirstX > 0 and x >= showFirstX: @@ -823,7 +823,7 @@ if count < threshold or (showFirstX > 0 and x >= showFirstX): other += count else: - print key + " "*(maxKeyLength - len(key) + 1) + countString + print(key + " "*(maxKRefactoringTool: Files that were modified: RefactoringTool: ../python3/nltk_contrib/nltk_contrib/concord.py eyLength - len(key) + 1) + countString) x += 1 if countOther: @@ -833,7 +833,7 @@ else: count = other countString = str(count) - print self._OTHER_TEXT + " "*(maxKeyLength - len(self._OTHER_TEXT) + 1) + countString + print(self._OTHER_TEXT + " "*(maxKeyLength - len(self._OTHER_TEXT) + 1) + countString) if showTotal: if normalise: count = 1.0 * total @@ -841,21 +841,21 @@ else: count = total countString = str(count) - print self._TOTAL_TEXT + " "*(maxKeyLength - len(self._TOTAL_TEXT) + 1) + countString - print "" + print(self._TOTAL_TEXT + " "*(maxKeyLength - len(self._TOTAL_TEXT) + 1) + countString) + print("") def demo(): """ Demonstrates how to use IndexConcordance and Aggregator. """ - print "Reading Brown Corpus into memory..." + print("Reading Brown Corpus into memory...") corpus = brown.tagged_sents('a') - print "Generating index..." + print("Generating index...") ic = IndexConcordance(corpus) - print "Showing all occurences of 'plasma' in the Brown Corpus..." + print("Showing all occurences of 'plasma' in the Brown Corpus...") ic.formatted(middleRegexp="^plasma/.*", verbose=True) - print "Investigating the collocates of 'deal' and derivatives..." 
+ print("Investigating the collocates of 'deal' and derivatives...") agg = Aggregator() agg.add(ic.raw(middleRegexp="^deal", leftContextLength=1, rightContextLength=0, leftRegexp="^(\w|\s|/)*$"), "Brown Corpus 'deal' left collocates") + for i in '$(find ../python3 -type f -name '\''*.py'\'' |grep -v '\''Tables\.py'\'')' + 2to3 -w -n ../python3/nltk_contrib/nltk_contrib/combined.py RefactoringTool: Skipping optional fixer: buffer RefactoringTool: Skipping optional fixer: idioms RefactoringTool: Skipping optional fixer: set_literal RefactoringTool: Skipping optional fixer: ws_comma RefactoringTool: Refactored ../python3/nltk_contrib/nltk_contrib/combined.py RefactoringTool: Files that were modified: RefactoringTool: ../python3/nltk_contrib/nltk_contrib/combined.py --- ../python3/nltk_contrib/nltk_contrib/combined.py (original) +++ ../python3/nltk_contrib/nltk_contrib/combined.py (refactored) @@ -96,7 +96,7 @@ self._brill = Brill(self._tagger[-1], []) self._brill.unmarshal(tagger_file) else: - print "error, tagger type not recognized." + print("error, tagger type not recognized.") def exemple_train (self, train_sents, verbose=False): self._append_default("N") @@ -124,8 +124,8 @@ ct.unmarshal("tresoldi") tokens = "Mauro viu o livro sobre a mesa".split() - print list(ct.tag(tokens)) + print(list(ct.tag(tokens))) # tests acc = tag.accuracy(ct, [train_sents]) - print 'Accuracy = %4.2f%%' % (100 * acc) + print('Accuracy = %4.2f%%' % (100 * acc)) + for i in '$(find ../python3 -type f -name '\''*.py'\'' |grep -v '\''Tables\.py'\'')' + 2to3 -w -n ../python3/nltk_contrib/nltk_contrib/classify/spearman.py RefactoringTool: Skipping optional fixer: buffer RefactoringTool: Skipping optional fixer: idioms RefactoringTool: Skipping optional fixer: set_literal RefactoringTool: Skipping optional fixer: ws_comma RefactoringTool: Refactored ../python3/nltk_contrib/nltk_contrib/classify/spearman.py RefactoringTool: Files that were modified: RefactoringTool: ../python3/nltk_contrib/nltk_contrib/classify/spearman.py --- ../python3/nltk_contrib/nltk_contrib/classify/spearman.py (original) +++ ../python3/nltk_contrib/nltk_contrib/classify/spearman.py (refactored) @@ -162,7 +162,7 @@ result = classifier.get_class_dict("a") for cls in result: - print cls, ':', result[cls] + print(cls, ':', result[cls]) """ expected values: class a: 'a' = 1 @@ -190,7 +190,7 @@ result = classifier.get_class_dict("aaababb") for cls in result: - print cls, ':', result[cls] + print(cls, ':', result[cls]) """ expected values: class a: 'aa' = 1 @@ -224,7 +224,7 @@ result = classifier.get_class_dict("aaababb") for cls in result: - print cls, ':', result[cls] + print(cls, ':', result[cls]) """ expected values: @@ -268,9 +268,9 @@ result = classifier.get_class_probs(list(islice(genesis.raw("english-kjv"), 150, 200))) - print 'english-kjv :', result.prob('english-kjv') - print 'french :', result.prob('french') - print 'finnish :', result.prob('finnish') + print('english-kjv :', result.prob('english-kjv')) + print('french :', result.prob('french')) + print('finnish :', result.prob('finnish')) if __name__ == '__main__': + for i in '$(find ../python3 -type f -name '\''*.py'\'' |grep -v '\''Tables\.py'\'')' + 2to3 -w -n ../python3/nltk_contrib/nltk_contrib/classify/naivebayes.py RefactoringTool: Skipping optional fixer: buffer RefactoringTool: Skipping optional fixer: idioms RefactoringTool: Skipping optional fixer: set_literal RefactoringTool: Skipping optional fixer: ws_comma RefactoringTool: Refactored 
../python3/nltk_contrib/nltk_contrib/classify/naivebayes.py RefactoringTool: Files that were modified: RefactoringTool: ../python3/nltk_contrib/nltk_contrib/classify/naivebayes.py --- ../python3/nltk_contrib/nltk_contrib/classify/naivebayes.py (original) +++ ../python3/nltk_contrib/nltk_contrib/classify/naivebayes.py (refactored) @@ -82,7 +82,8 @@ self._cls_prob_dist = GoodTuringProbDist(cls_freq_dist, cls_freq_dist.B()) # for features - def make_probdist(freqdist, (cls, fname)): + def make_probdist(freqdist, xxx_todo_changeme): + (cls, fname) = xxx_todo_changeme return GoodTuringProbDist(freqdist, len(feature_values[fname])) self._feat_prob_dist = ConditionalProbDist(feat_freq_dist, make_probdist, True) @@ -149,7 +150,7 @@ result = classifier.get_class_dict("a") for cls in result: - print cls, ':', result[cls] + print(cls, ':', result[cls]) """ expected values: @@ -180,7 +181,7 @@ result = classifier.get_class_dict("aababb") for cls in result: - print cls, ':', result[cls] + print(cls, ':', result[cls]) """ expected values: class_probs a = 0.5 @@ -215,7 +216,7 @@ result = classifier.get_class_dict("aaababb") for cls in result: - print cls, ':', result[cls] + print(cls, ':', result[cls]) """ expected values: @@ -260,9 +261,9 @@ result = classifier.get_class_probs(list(islice(genesis.raw("english-kjv"), 150, 200))) - print 'english-kjv :', result.prob('english-kjv') - print 'french :', result.prob('french') - print 'finnish :', result.prob('finnish') + print('english-kjv :', result.prob('english-kjv')) + print('french :', result.prob('french')) + print('finnish :', result.prob('finnish')) if __name__ == '__main__': demo2() + for i in '$(find ../python3 -type f -name '\''*.py'\'' |grep -v '\''Tables\.py'\'')' + 2to3 -w -n ../python3/nltk_contrib/nltk_contrib/classify/cosine.py RefactoringTool: Skipping optional fixer: buffer RefactoringTool: Skipping optional fixer: idioms RefactoringTool: Skipping optional fixer: set_literal RefactoringTool: Skipping optional fixer: ws_comma RefactoringTool: Refactored ../python3/nltk_contrib/nltk_contrib/classify/cosine.py RefactoringTool: Files that were modified: RefactoringTool: ../python3/nltk_contrib/nltk_contrib/classify/cosine.py --- ../python3/nltk_contrib/nltk_contrib/classify/cosine.py (original) +++ ../python3/nltk_contrib/nltk_contrib/classify/cosine.py (refactored) @@ -144,7 +144,7 @@ result = classifier.get_class_dict("a") for cls in result: - print cls, ':', result[cls] + print(cls, ':', result[cls]) """ expected values: @@ -181,7 +181,7 @@ result = classifier.get_class_dict("aaababb") for cls in result: - print cls, ':', result[cls] + print(cls, ':', result[cls]) """ expected values: class a: 'aa' = 5 @@ -220,7 +220,7 @@ result = classifier.get_class_dict("aaababb") for cls in result: - print cls, ':', result[cls] + print(cls, ':', result[cls]) """ expected values: @@ -270,9 +270,9 @@ result = classifier.get_class_probs(list(islice(genesis.raw("english-kjv"), 150, 200))) - print 'english-kjv :', result.prob('english-kjv') - print 'french :', result.prob('french') - print 'finnish :', result.prob('finnish') + print('english-kjv :', result.prob('english-kjv')) + print('french :', result.prob('french')) + print('finnish :', result.prob('finnish')) if __name__ == '__main__': + for i in '$(find ../python3 -type f -name '\''*.py'\'' |grep -v '\''Tables\.py'\'')' + 2to3 -w -n ../python3/nltk_contrib/nltk_contrib/classify/__init__.py RefactoringTool: Skipping optional fixer: buffer RefactoringTool: Skipping optional fixer: idioms RefactoringTool: 
Skipping optional fixer: set_literal RefactoringTool: Skipping optional fixer: ws_comma RefactoringTool: Refactored ../python3/nltk_contrib/nltk_contrib/classify/__init__.py RefactoringTool: Files that were modified: RefactoringTool: ../python3/nltk_contrib/nltk_contrib/classify/__init__.py --- ../python3/nltk_contrib/nltk_contrib/classify/__init__.py (original) +++ ../python3/nltk_contrib/nltk_contrib/classify/__init__.py (refactored) @@ -110,6 +110,6 @@ return float(correct) / len(gold) -from cosine import * -from naivebayes import * -from spearman import * +from .cosine import * +from .naivebayes import * +from .spearman import * + for i in '$(find ../python3 -type f -name '\''*.py'\'' |grep -v '\''Tables\.py'\'')' + 2to3 -w -n ../python3/nltk_contrib/nltk_contrib/classifier_tests/zerortests.py RefactoringTool: Skipping optional fixer: buffer RefactoringTool: Skipping optional fixer: idioms RefactoringTool: Skipping optional fixer: set_literal RefactoringTool: Skipping optional fixer: ws_comma RefactoringTool: No changes to ../python3/nltk_contrib/nltk_contrib/classifier_tests/zerortests.py RefactoringTool: Files that need to be modified: RefactoringTool: ../python3/nltk_contrib/nltk_contrib/classifier_tests/zerortests.py + for i in '$(find ../python3 -type f -name '\''*.py'\'' |grep -v '\''Tables\.py'\'')' + 2to3 -w -n ../python3/nltk_contrib/nltk_contrib/classifier_tests/utilities/stats.py RefactoringTool: Skipping optional fixer: buffer RefactoringTool: Skipping optional fixer: idioms RefactoringTool: Skipping optional fixer: set_literal RefactoringTool: Skipping optional fixer: ws_comma RefactoringTool: No files need to be modified. + for i in '$(find ../python3 -type f -name '\''*.py'\'' |grep -v '\''Tables\.py'\'')' + 2to3 -w -n ../python3/nltk_contrib/nltk_contrib/classifier_tests/utilities/convert.py RefactoringTool: Skipping optional fixer: buffer RefactoringTool: Skipping optional fixer: idioms RefactoringTool: Skipping optional fixer: set_literal RefactoringTool: Skipping optional fixer: ws_comma RefactoringTool: Refactored ../python3/nltk_contrib/nltk_contrib/classifier_tests/utilities/convert.py RefactoringTool: Files that were modified: RefactoringTool: ../python3/nltk_contrib/nltk_contrib/classifier_tests/utilities/convert.py --- ../python3/nltk_contrib/nltk_contrib/classifier_tests/utilities/convert.py (original) +++ ../python3/nltk_contrib/nltk_contrib/classifier_tests/utilities/convert.py (refactored) @@ -49,7 +49,7 @@ for line in f: words = line.split(sep) if not index < len(words): - print "Warning! omitting line " + str(line) + print("Warning! 
omitting line " + str(line)) continue values.add(words[index]) return ','.join(values) @@ -65,7 +65,7 @@ ind = path.rfind('.') if ind == -1: ind = len(path) nf = open(path[:ind] + 'conv' + path[ind:], 'w') - for l in converted:print >>nf, l + for l in converted:print(l, file=nf) nf.close() def convert_log_to_csv(path): @@ -73,7 +73,7 @@ csvf = open(path + '.csv', 'w') for each in classifications: - print >>csvf, each.algorithm + ',' + each.training + ',' + each.test + ',' + each.gold + ',' + each.accuracy + ',' + each.f_score + print(each.algorithm + ',' + each.training + ',' + each.test + ',' + each.gold + ',' + each.accuracy + ',' + each.f_score, file=csvf) def get_classification_log_entries(path): f = open(path) @@ -215,15 +215,15 @@ texf = open(path + '-acc.tex', 'w') for table in accuracy_tables: - print >>texf, table + print(table, file=texf) texf = open(path + '-fs.tex', 'w') for table in f_score_tables: - print >>texf, table + print(table, file=texf) texf = open(path + '-macc.tex', 'w') for table in mean_accuracy_tables: - print >>texf, table + print(table, file=texf) texf = open(path + '-mdatasets.tex', 'w') - print >>texf, mean_datasets + print(mean_datasets, file=texf) def get_stat_lists(cols): return dict([(each, util.StatList()) for each in cols]) + for i in '$(find ../python3 -type f -name '\''*.py'\'' |grep -v '\''Tables\.py'\'')' + 2to3 -w -n ../python3/nltk_contrib/nltk_contrib/classifier_tests/utilities/batchtest.py RefactoringTool: Skipping optional fixer: buffer RefactoringTool: Skipping optional fixer: idioms RefactoringTool: Skipping optional fixer: set_literal RefactoringTool: Skipping optional fixer: ws_comma RefactoringTool: Refactored ../python3/nltk_contrib/nltk_contrib/classifier_tests/utilities/batchtest.py RefactoringTool: Files that were modified: RefactoringTool: ../python3/nltk_contrib/nltk_contrib/classifier_tests/utilities/batchtest.py --- ../python3/nltk_contrib/nltk_contrib/classifier_tests/utilities/batchtest.py (original) +++ ../python3/nltk_contrib/nltk_contrib/classifier_tests/utilities/batchtest.py (refactored) @@ -15,7 +15,7 @@ print('in run') for dir_name, dirs, files in os.walk(root_path): data = set([]) - print('Dir name ' + str(dir_name) + ' dirs ' + str(dirs) + ' files ' + str(files)) + print(('Dir name ' + str(dir_name) + ' dirs ' + str(dirs) + ' files ' + str(files))) for file in files: index = file.rfind('.') if index != -1: @@ -65,7 +65,7 @@ for suffix in all: params = ['-a', algorithm, '-f', path + suffix, '-l', log_path, '-c', 5] - print "Params " + str(params) + print("Params " + str(params)) c.Classify().run(params) def to_str_array(value, times): @@ -91,13 +91,13 @@ resp = 0 while(resp != 1 and resp != 2): try: - resp = int(raw_input("Select one of following options:\n1. Run all tests\n2. Delete generated files\n")) + resp = int(input("Select one of following options:\n1. Run all tests\n2. 
Delete generated files\n")) except ValueError: pass if resp == 1: - dir_tree_path = raw_input("Enter directory tree path") - log_file = raw_input("Enter log file") + dir_tree_path = input("Enter directory tree path") + log_file = input("Enter log file") run(dir_tree_path, log_file) elif resp == 2: - dir_path = raw_input("Enter directory path") + dir_path = input("Enter directory path") delete_generated_files(dir_path) + for i in '$(find ../python3 -type f -name '\''*.py'\'' |grep -v '\''Tables\.py'\'')' + 2to3 -w -n ../python3/nltk_contrib/nltk_contrib/classifier_tests/utilities/__init__.py RefactoringTool: Skipping optional fixer: buffer RefactoringTool: Skipping optional fixer: idioms RefactoringTool: Skipping optional fixer: set_literal RefactoringTool: Skipping optional fixer: ws_comma RefactoringTool: No files need to be modified. + for i in '$(find ../python3 -type f -name '\''*.py'\'' |grep -v '\''Tables\.py'\'')' + 2to3 -w -n ../python3/nltk_contrib/nltk_contrib/classifier_tests/onertests.py RefactoringTool: Skipping optional fixer: buffer RefactoringTool: Skipping optional fixer: idioms RefactoringTool: Skipping optional fixer: set_literal RefactoringTool: Skipping optional fixer: ws_comma RefactoringTool: No changes to ../python3/nltk_contrib/nltk_contrib/classifier_tests/onertests.py RefactoringTool: Files that need to be modified: RefactoringTool: ../python3/nltk_contrib/nltk_contrib/classifier_tests/onertests.py + for i in '$(find ../python3 -type f -name '\''*.py'\'' |grep -v '\''Tables\.py'\'')' + 2to3 -w -n ../python3/nltk_contrib/nltk_contrib/classifier_tests/numrangetests.py RefactoringTool: Skipping optional fixer: buffer RefactoringTool: Skipping optional fixer: idioms RefactoringTool: Skipping optional fixer: set_literal RefactoringTool: Skipping optional fixer: ws_comma RefactoringTool: Refactored ../python3/nltk_contrib/nltk_contrib/classifier_tests/numrangetests.py RefactoringTool: Files that were modified: RefactoringTool: ../python3/nltk_contrib/nltk_contrib/classifier_tests/numrangetests.py --- ../python3/nltk_contrib/nltk_contrib/classifier_tests/numrangetests.py (original) +++ ../python3/nltk_contrib/nltk_contrib/classifier_tests/numrangetests.py (refactored) @@ -57,14 +57,14 @@ def test_split_returns_none_when_lower_eq_upper(self): _range = r.Range() - self.assertEquals(None, _range.split(2)) + self.assertEqual(None, _range.split(2)) def test_split_returns_none_if_size_of_each_split_is_less_than_delta(self): try: _range = r.Range(0, 0.000005) _range.split(7) - except (se.SystemError), e: - self.assertEquals('Splitting of range resulted in elements smaller than delta 1e-06.', e.message) + except (se.SystemError) as e: + self.assertEqual('Splitting of range resulted in elements smaller than delta 1e-06.', e.message) def test_split_includes_the_highest_and_lowest(self): _range = r.Range() + for i in '$(find ../python3 -type f -name '\''*.py'\'' |grep -v '\''Tables\.py'\'')' + 2to3 -w -n ../python3/nltk_contrib/nltk_contrib/classifier_tests/naivebayestests.py RefactoringTool: Skipping optional fixer: buffer RefactoringTool: Skipping optional fixer: idioms RefactoringTool: Skipping optional fixer: set_literal RefactoringTool: Skipping optional fixer: ws_comma RefactoringTool: No changes to ../python3/nltk_contrib/nltk_contrib/classifier_tests/naivebayestests.py RefactoringTool: Files that need to be modified: RefactoringTool: ../python3/nltk_contrib/nltk_contrib/classifier_tests/naivebayestests.py + for i in '$(find ../python3 -type f -name '\''*.py'\'' |grep -v 
'\''Tables\.py'\'')' + 2to3 -w -n ../python3/nltk_contrib/nltk_contrib/classifier_tests/knntests.py RefactoringTool: Skipping optional fixer: buffer RefactoringTool: Skipping optional fixer: idioms RefactoringTool: Skipping optional fixer: set_literal RefactoringTool: Skipping optional fixer: ws_comma RefactoringTool: No changes to ../python3/nltk_contrib/nltk_contrib/classifier_tests/knntests.py RefactoringTool: Files that need to be modified: RefactoringTool: ../python3/nltk_contrib/nltk_contrib/classifier_tests/knntests.py + for i in '$(find ../python3 -type f -name '\''*.py'\'' |grep -v '\''Tables\.py'\'')' + 2to3 -w -n ../python3/nltk_contrib/nltk_contrib/classifier_tests/itemtests.py RefactoringTool: Skipping optional fixer: buffer RefactoringTool: Skipping optional fixer: idioms RefactoringTool: Skipping optional fixer: set_literal RefactoringTool: Skipping optional fixer: ws_comma RefactoringTool: No changes to ../python3/nltk_contrib/nltk_contrib/classifier_tests/itemtests.py RefactoringTool: Files that need to be modified: RefactoringTool: ../python3/nltk_contrib/nltk_contrib/classifier_tests/itemtests.py + for i in '$(find ../python3 -type f -name '\''*.py'\'' |grep -v '\''Tables\.py'\'')' + 2to3 -w -n ../python3/nltk_contrib/nltk_contrib/classifier_tests/instancetests.py RefactoringTool: Skipping optional fixer: buffer RefactoringTool: Skipping optional fixer: idioms RefactoringTool: Skipping optional fixer: set_literal RefactoringTool: Skipping optional fixer: ws_comma RefactoringTool: No changes to ../python3/nltk_contrib/nltk_contrib/classifier_tests/instancetests.py RefactoringTool: Files that need to be modified: RefactoringTool: ../python3/nltk_contrib/nltk_contrib/classifier_tests/instancetests.py + for i in '$(find ../python3 -type f -name '\''*.py'\'' |grep -v '\''Tables\.py'\'')' + 2to3 -w -n ../python3/nltk_contrib/nltk_contrib/classifier_tests/instancestests.py RefactoringTool: Skipping optional fixer: buffer RefactoringTool: Skipping optional fixer: idioms RefactoringTool: Skipping optional fixer: set_literal RefactoringTool: Skipping optional fixer: ws_comma RefactoringTool: Refactored ../python3/nltk_contrib/nltk_contrib/classifier_tests/instancestests.py RefactoringTool: Files that were modified: RefactoringTool: ../python3/nltk_contrib/nltk_contrib/classifier_tests/instancestests.py --- ../python3/nltk_contrib/nltk_contrib/classifier_tests/instancestests.py (original) +++ ../python3/nltk_contrib/nltk_contrib/classifier_tests/instancestests.py (refactored) @@ -285,7 +285,7 @@ path = datasetsDir(self) + 'numerical' + SEP + 'person' _training = training(path) class_freq_dist = _training.class_freq_dist() - self.assertEqual(['yes','no'], class_freq_dist.keys()) + self.assertEqual(['yes','no'], list(class_freq_dist.keys())) def test_posterior_probablities_with_discrete_values(self): + for i in '$(find ../python3 -type f -name '\''*.py'\'' |grep -v '\''Tables\.py'\'')' + 2to3 -w -n ../python3/nltk_contrib/nltk_contrib/classifier_tests/inittests.py RefactoringTool: Skipping optional fixer: buffer RefactoringTool: Skipping optional fixer: idioms RefactoringTool: Skipping optional fixer: set_literal RefactoringTool: Skipping optional fixer: ws_comma RefactoringTool: No changes to ../python3/nltk_contrib/nltk_contrib/classifier_tests/inittests.py RefactoringTool: Files that need to be modified: RefactoringTool: ../python3/nltk_contrib/nltk_contrib/classifier_tests/inittests.py + for i in '$(find ../python3 -type f -name '\''*.py'\'' |grep -v '\''Tables\.py'\'')' + 2to3 
-w -n ../python3/nltk_contrib/nltk_contrib/classifier_tests/formattests.py RefactoringTool: Skipping optional fixer: buffer RefactoringTool: Skipping optional fixer: idioms RefactoringTool: Skipping optional fixer: set_literal RefactoringTool: Skipping optional fixer: ws_comma RefactoringTool: No changes to ../python3/nltk_contrib/nltk_contrib/classifier_tests/formattests.py RefactoringTool: Files that need to be modified: RefactoringTool: ../python3/nltk_contrib/nltk_contrib/classifier_tests/formattests.py + for i in '$(find ../python3 -type f -name '\''*.py'\'' |grep -v '\''Tables\.py'\'')' + 2to3 -w -n ../python3/nltk_contrib/nltk_contrib/classifier_tests/featureselecttests.py RefactoringTool: Skipping optional fixer: buffer RefactoringTool: Skipping optional fixer: idioms RefactoringTool: Skipping optional fixer: set_literal RefactoringTool: Skipping optional fixer: ws_comma RefactoringTool: No changes to ../python3/nltk_contrib/nltk_contrib/classifier_tests/featureselecttests.py RefactoringTool: Files that need to be modified: RefactoringTool: ../python3/nltk_contrib/nltk_contrib/classifier_tests/featureselecttests.py + for i in '$(find ../python3 -type f -name '\''*.py'\'' |grep -v '\''Tables\.py'\'')' + 2to3 -w -n ../python3/nltk_contrib/nltk_contrib/classifier_tests/distancemetrictests.py RefactoringTool: Skipping optional fixer: buffer RefactoringTool: Skipping optional fixer: idioms RefactoringTool: Skipping optional fixer: set_literal RefactoringTool: Skipping optional fixer: ws_comma RefactoringTool: No changes to ../python3/nltk_contrib/nltk_contrib/classifier_tests/distancemetrictests.py RefactoringTool: Files that need to be modified: RefactoringTool: ../python3/nltk_contrib/nltk_contrib/classifier_tests/distancemetrictests.py + for i in '$(find ../python3 -type f -name '\''*.py'\'' |grep -v '\''Tables\.py'\'')' + 2to3 -w -n ../python3/nltk_contrib/nltk_contrib/classifier_tests/discretisetests.py RefactoringTool: Skipping optional fixer: buffer RefactoringTool: Skipping optional fixer: idioms RefactoringTool: Skipping optional fixer: set_literal RefactoringTool: Skipping optional fixer: ws_comma RefactoringTool: No changes to ../python3/nltk_contrib/nltk_contrib/classifier_tests/discretisetests.py RefactoringTool: Files that need to be modified: RefactoringTool: ../python3/nltk_contrib/nltk_contrib/classifier_tests/discretisetests.py + for i in '$(find ../python3 -type f -name '\''*.py'\'' |grep -v '\''Tables\.py'\'')' + 2to3 -w -n ../python3/nltk_contrib/nltk_contrib/classifier_tests/discretisedattributetests.py RefactoringTool: Skipping optional fixer: buffer RefactoringTool: Skipping optional fixer: idioms RefactoringTool: Skipping optional fixer: set_literal RefactoringTool: Skipping optional fixer: ws_comma RefactoringTool: No changes to ../python3/nltk_contrib/nltk_contrib/classifier_tests/discretisedattributetests.py RefactoringTool: Files that need to be modified: RefactoringTool: ../python3/nltk_contrib/nltk_contrib/classifier_tests/discretisedattributetests.py + for i in '$(find ../python3 -type f -name '\''*.py'\'' |grep -v '\''Tables\.py'\'')' + 2to3 -w -n ../python3/nltk_contrib/nltk_contrib/classifier_tests/decisiontreetests.py RefactoringTool: Skipping optional fixer: buffer RefactoringTool: Skipping optional fixer: idioms RefactoringTool: Skipping optional fixer: set_literal RefactoringTool: Skipping optional fixer: ws_comma RefactoringTool: No changes to ../python3/nltk_contrib/nltk_contrib/classifier_tests/decisiontreetests.py RefactoringTool: Files that 
need to be modified: RefactoringTool: ../python3/nltk_contrib/nltk_contrib/classifier_tests/decisiontreetests.py + for i in '$(find ../python3 -type f -name '\''*.py'\'' |grep -v '\''Tables\.py'\'')' + 2to3 -w -n ../python3/nltk_contrib/nltk_contrib/classifier_tests/decisionstumptests.py RefactoringTool: Skipping optional fixer: buffer RefactoringTool: Skipping optional fixer: idioms RefactoringTool: Skipping optional fixer: set_literal RefactoringTool: Skipping optional fixer: ws_comma RefactoringTool: Refactored ../python3/nltk_contrib/nltk_contrib/classifier_tests/decisionstumptests.py RefactoringTool: Files that were modified: RefactoringTool: ../python3/nltk_contrib/nltk_contrib/classifier_tests/decisionstumptests.py --- ../python3/nltk_contrib/nltk_contrib/classifier_tests/decisionstumptests.py (original) +++ ../python3/nltk_contrib/nltk_contrib/classifier_tests/decisionstumptests.py (refactored) @@ -112,7 +112,7 @@ values = ds.dictionary_of_values(phoney); self.assertEqual(3, len(values)) for i in ['a', 'b', 'c']: - self.assertTrue(values.has_key(i)) + self.assertTrue(i in values) self.assertEqual(0, values[i]) def test_gain_ratio(self): + for i in '$(find ../python3 -type f -name '\''*.py'\'' |grep -v '\''Tables\.py'\'')' + 2to3 -w -n ../python3/nltk_contrib/nltk_contrib/classifier_tests/confusionmatrixtests.py RefactoringTool: Skipping optional fixer: buffer RefactoringTool: Skipping optional fixer: idioms RefactoringTool: Skipping optional fixer: set_literal RefactoringTool: Skipping optional fixer: ws_comma RefactoringTool: No changes to ../python3/nltk_contrib/nltk_contrib/classifier_tests/confusionmatrixtests.py RefactoringTool: Files that need to be modified: RefactoringTool: ../python3/nltk_contrib/nltk_contrib/classifier_tests/confusionmatrixtests.py + for i in '$(find ../python3 -type f -name '\''*.py'\'' |grep -v '\''Tables\.py'\'')' + 2to3 -w -n ../python3/nltk_contrib/nltk_contrib/classifier_tests/commandlinetests.py RefactoringTool: Skipping optional fixer: buffer RefactoringTool: Skipping optional fixer: idioms RefactoringTool: Skipping optional fixer: set_literal RefactoringTool: Skipping optional fixer: ws_comma RefactoringTool: No changes to ../python3/nltk_contrib/nltk_contrib/classifier_tests/commandlinetests.py RefactoringTool: Files that need to be modified: RefactoringTool: ../python3/nltk_contrib/nltk_contrib/classifier_tests/commandlinetests.py + for i in '$(find ../python3 -type f -name '\''*.py'\'' |grep -v '\''Tables\.py'\'')' + 2to3 -w -n ../python3/nltk_contrib/nltk_contrib/classifier_tests/classifytests.py RefactoringTool: Skipping optional fixer: buffer RefactoringTool: Skipping optional fixer: idioms RefactoringTool: Skipping optional fixer: set_literal RefactoringTool: Skipping optional fixer: ws_comma RefactoringTool: Refactored ../python3/nltk_contrib/nltk_contrib/classifier_tests/classifytests.py RefactoringTool: Files that were modified: RefactoringTool: ../python3/nltk_contrib/nltk_contrib/classifier_tests/classifytests.py --- ../python3/nltk_contrib/nltk_contrib/classifier_tests/classifytests.py (original) +++ ../python3/nltk_contrib/nltk_contrib/classifier_tests/classifytests.py (refactored) @@ -126,28 +126,28 @@ def test_get_file_strategy(self): strategy = c.get_file_strategy('files', None, None, None, True) self.assertEqual(c.CommonBaseNameStrategy, strategy.__class__) - values = strategy.values() + values = list(strategy.values()) self.assertEqual(values[0], 'files') self.assertEqual(values[1], None) self.assertEqual(values[2], 'files') 
strategy = c.get_file_strategy('files', None, None, None, False) self.assertEqual(c.CommonBaseNameStrategy, strategy.__class__) - values = strategy.values() + values = list(strategy.values()) self.assertEqual(values[0], 'files') self.assertEqual(values[1], 'files') self.assertEqual(values[2], None) strategy = c.get_file_strategy(None, 'train', 'test', None, False) self.assertEqual(c.ExplicitNamesStrategy, strategy.__class__) - values = strategy.values() + values = list(strategy.values()) self.assertEqual(values[0], 'train') self.assertEqual(values[1], 'test') self.assertEqual(values[2], None) strategy = c.get_file_strategy(None, 'train', None, 'gold', False) self.assertEqual(c.ExplicitNamesStrategy, strategy.__class__) - values = strategy.values() + values = list(strategy.values()) self.assertEqual(values[0], 'train') self.assertEqual(values[1], None) self.assertEqual(values[2], 'gold') + for i in '$(find ../python3 -type f -name '\''*.py'\'' |grep -v '\''Tables\.py'\'')' + 2to3 -w -n ../python3/nltk_contrib/nltk_contrib/classifier_tests/cfiletests.py RefactoringTool: Skipping optional fixer: buffer RefactoringTool: Skipping optional fixer: idioms RefactoringTool: Skipping optional fixer: set_literal RefactoringTool: Skipping optional fixer: ws_comma RefactoringTool: No changes to ../python3/nltk_contrib/nltk_contrib/classifier_tests/cfiletests.py RefactoringTool: Files that need to be modified: RefactoringTool: ../python3/nltk_contrib/nltk_contrib/classifier_tests/cfiletests.py + for i in '$(find ../python3 -type f -name '\''*.py'\'' |grep -v '\''Tables\.py'\'')' + 2to3 -w -n ../python3/nltk_contrib/nltk_contrib/classifier_tests/autoclasstests.py RefactoringTool: Skipping optional fixer: buffer RefactoringTool: Skipping optional fixer: idioms RefactoringTool: Skipping optional fixer: set_literal RefactoringTool: Skipping optional fixer: ws_comma RefactoringTool: Refactored ../python3/nltk_contrib/nltk_contrib/classifier_tests/autoclasstests.py RefactoringTool: Files that were modified: RefactoringTool: ../python3/nltk_contrib/nltk_contrib/classifier_tests/autoclasstests.py --- ../python3/nltk_contrib/nltk_contrib/classifier_tests/autoclasstests.py (original) +++ ../python3/nltk_contrib/nltk_contrib/classifier_tests/autoclasstests.py (refactored) @@ -25,9 +25,9 @@ def test_next(self): a = autoclass.FIRST - b = a.next() + b = next(a) self.assertEqual('b', str(b)) - self.assertEqual('c', str(b.next())) + self.assertEqual('c', str(next(b))) self.assertEqual('z', self.next('y')) self.assertEqual('ba', self.next('z')) self.assertEqual('bb', self.next('ba')) @@ -36,4 +36,4 @@ self.assertEqual('baa', self.next('zz')) def next(self, current): - return str(autoclass.AutoClass(current).next()) + return str(next(autoclass.AutoClass(current))) + for i in '$(find ../python3 -type f -name '\''*.py'\'' |grep -v '\''Tables\.py'\'')' + 2to3 -w -n ../python3/nltk_contrib/nltk_contrib/classifier_tests/attributetests.py RefactoringTool: Skipping optional fixer: buffer RefactoringTool: Skipping optional fixer: idioms RefactoringTool: Skipping optional fixer: set_literal RefactoringTool: Skipping optional fixer: ws_comma RefactoringTool: No changes to ../python3/nltk_contrib/nltk_contrib/classifier_tests/attributetests.py RefactoringTool: Files that need to be modified: RefactoringTool: ../python3/nltk_contrib/nltk_contrib/classifier_tests/attributetests.py + for i in '$(find ../python3 -type f -name '\''*.py'\'' |grep -v '\''Tables\.py'\'')' + 2to3 -w -n 
../python3/nltk_contrib/nltk_contrib/classifier_tests/attributestests.py RefactoringTool: Skipping optional fixer: buffer RefactoringTool: Skipping optional fixer: idioms RefactoringTool: Skipping optional fixer: set_literal RefactoringTool: Skipping optional fixer: ws_comma RefactoringTool: No changes to ../python3/nltk_contrib/nltk_contrib/classifier_tests/attributestests.py RefactoringTool: Files that need to be modified: RefactoringTool: ../python3/nltk_contrib/nltk_contrib/classifier_tests/attributestests.py + for i in '$(find ../python3 -type f -name '\''*.py'\'' |grep -v '\''Tables\.py'\'')' + 2to3 -w -n ../python3/nltk_contrib/nltk_contrib/classifier_tests/alltests.py RefactoringTool: Skipping optional fixer: buffer RefactoringTool: Skipping optional fixer: idioms RefactoringTool: Skipping optional fixer: set_literal RefactoringTool: Skipping optional fixer: ws_comma RefactoringTool: Refactored ../python3/nltk_contrib/nltk_contrib/classifier_tests/alltests.py RefactoringTool: Files that were modified: RefactoringTool: ../python3/nltk_contrib/nltk_contrib/classifier_tests/alltests.py --- ../python3/nltk_contrib/nltk_contrib/classifier_tests/alltests.py (original) +++ ../python3/nltk_contrib/nltk_contrib/classifier_tests/alltests.py (refactored) @@ -13,10 +13,10 @@ for dn,d,f in os.walk('.'): if dn is not '.': continue testfilenames = [filename for filename in f if re.search('tests\.py$', filename) is not None] - modulenames = map(lambda f: re.sub('\.py$', '', f), testfilenames) - modules = map(__import__, modulenames) + modulenames = [re.sub('\.py$', '', f) for f in testfilenames] + modules = list(map(__import__, modulenames)) load = unittest.defaultTestLoader.loadTestsFromModule - return unittest.TestSuite(map(load, modules)) + return unittest.TestSuite(list(map(load, modules))) if __name__ == '__main__': runner = unittest.TextTestRunner() + for i in '$(find ../python3 -type f -name '\''*.py'\'' |grep -v '\''Tables\.py'\'')' + 2to3 -w -n ../python3/nltk_contrib/nltk_contrib/classifier_tests/__init__.py RefactoringTool: Skipping optional fixer: buffer RefactoringTool: Skipping optional fixer: idioms RefactoringTool: Skipping optional fixer: set_literal RefactoringTool: Skipping optional fixer: ws_comma RefactoringTool: No changes to ../python3/nltk_contrib/nltk_contrib/classifier_tests/__init__.py RefactoringTool: Files that need to be modified: RefactoringTool: ../python3/nltk_contrib/nltk_contrib/classifier_tests/__init__.py + for i in '$(find ../python3 -type f -name '\''*.py'\'' |grep -v '\''Tables\.py'\'')' + 2to3 -w -n ../python3/nltk_contrib/nltk_contrib/classifier/zeror.py RefactoringTool: Skipping optional fixer: buffer RefactoringTool: Skipping optional fixer: idioms RefactoringTool: Skipping optional fixer: set_literal RefactoringTool: Skipping optional fixer: ws_comma RefactoringTool: Refactored ../python3/nltk_contrib/nltk_contrib/classifier/zeror.py RefactoringTool: Files that were modified: RefactoringTool: ../python3/nltk_contrib/nltk_contrib/classifier/zeror.py --- ../python3/nltk_contrib/nltk_contrib/classifier/zeror.py (original) +++ ../python3/nltk_contrib/nltk_contrib/classifier/zeror.py (refactored) @@ -36,7 +36,7 @@ def __max(self): max, klass_value = 0, None - for key in self.__klassCount.keys(): + for key in list(self.__klassCount.keys()): value = self.__klassCount[key] if value > max: max = value + for i in '$(find ../python3 -type f -name '\''*.py'\'' |grep -v '\''Tables\.py'\'')' + 2to3 -w -n ../python3/nltk_contrib/nltk_contrib/classifier/util.py 
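The zeror.py hunk above illustrates the fixer that dominates this run: on Python 3, dict.keys(), .values() and .items() return views rather than lists, so 2to3 wraps each call in list() wherever the result is indexed, sorted, or otherwise treated as a list (the same rewrite recurs below in util.py, knn.py, naivebayes.py and decisionstump.py, and the related fixer turns d.has_key(k) into k in d). A minimal sketch of the before/after behaviour, using a hypothetical distances mapping rather than the real classifier state:

    # Python 2: keys() returned a fresh list, so "keys = d.keys(); keys.sort()" worked.
    # Python 3: keys() is a view with no .sort(), so 2to3 materializes it first.
    distances = {0.8: ['far'], 0.2: ['near'], 0.5: ['mid']}
    keys = list(distances.keys())   # the mechanical 2to3 rewrite
    keys.sort()
    assert distances[keys[0]] == ['near']
    # Idiomatic Python 3 skips the copy-and-sort step entirely:
    assert distances[min(distances)] == ['near']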
RefactoringTool: Skipping optional fixer: buffer RefactoringTool: Skipping optional fixer: idioms RefactoringTool: Skipping optional fixer: set_literal RefactoringTool: Skipping optional fixer: ws_comma RefactoringTool: Refactored ../python3/nltk_contrib/nltk_contrib/classifier/util.py RefactoringTool: Files that were modified: RefactoringTool: ../python3/nltk_contrib/nltk_contrib/classifier/util.py --- ../python3/nltk_contrib/nltk_contrib/classifier/util.py (original) +++ ../python3/nltk_contrib/nltk_contrib/classifier/util.py (refactored) @@ -3,11 +3,11 @@ # # URL: # This software is distributed under GPL, for license information see LICENSE.TXT -import UserList, math +import collections, math -class StatList(UserList.UserList): +class StatList(collections.UserList): def __init__(self, values=None): - UserList.UserList.__init__(self, values) + collections.UserList.__init__(self, values) def mean(self): if len(self.data) == 0: return 0 + for i in '$(find ../python3 -type f -name '\''*.py'\'' |grep -v '\''Tables\.py'\'')' + 2to3 -w -n ../python3/nltk_contrib/nltk_contrib/classifier/oner.py RefactoringTool: Skipping optional fixer: buffer RefactoringTool: Skipping optional fixer: idioms RefactoringTool: Skipping optional fixer: set_literal RefactoringTool: Skipping optional fixer: ws_comma RefactoringTool: No changes to ../python3/nltk_contrib/nltk_contrib/classifier/oner.py RefactoringTool: Files that need to be modified: RefactoringTool: ../python3/nltk_contrib/nltk_contrib/classifier/oner.py + for i in '$(find ../python3 -type f -name '\''*.py'\'' |grep -v '\''Tables\.py'\'')' + 2to3 -w -n ../python3/nltk_contrib/nltk_contrib/classifier/numrange.py RefactoringTool: Skipping optional fixer: buffer RefactoringTool: Skipping optional fixer: idioms RefactoringTool: Skipping optional fixer: set_literal RefactoringTool: Skipping optional fixer: ws_comma RefactoringTool: No changes to ../python3/nltk_contrib/nltk_contrib/classifier/numrange.py RefactoringTool: Files that need to be modified: RefactoringTool: ../python3/nltk_contrib/nltk_contrib/classifier/numrange.py + for i in '$(find ../python3 -type f -name '\''*.py'\'' |grep -v '\''Tables\.py'\'')' + 2to3 -w -n ../python3/nltk_contrib/nltk_contrib/classifier/naivebayes.py RefactoringTool: Skipping optional fixer: buffer RefactoringTool: Skipping optional fixer: idioms RefactoringTool: Skipping optional fixer: set_literal RefactoringTool: Skipping optional fixer: ws_comma RefactoringTool: Refactored ../python3/nltk_contrib/nltk_contrib/classifier/naivebayes.py RefactoringTool: Files that were modified: RefactoringTool: ../python3/nltk_contrib/nltk_contrib/classifier/naivebayes.py --- ../python3/nltk_contrib/nltk_contrib/classifier/naivebayes.py (original) +++ ../python3/nltk_contrib/nltk_contrib/classifier/naivebayes.py (refactored) @@ -30,7 +30,7 @@ for klass_value in self.klass: class_conditional_probability = self.class_conditional_probability(instance, klass_value) estimates_using_prob[class_conditional_probability] = klass_value - keys = estimates_using_prob.keys() + keys = list(estimates_using_prob.keys()) keys.sort()#find the one with max conditional prob return estimates_using_prob[keys[-1]] + for i in '$(find ../python3 -type f -name '\''*.py'\'' |grep -v '\''Tables\.py'\'')' + 2to3 -w -n ../python3/nltk_contrib/nltk_contrib/classifier/knn.py RefactoringTool: Skipping optional fixer: buffer RefactoringTool: Skipping optional fixer: idioms RefactoringTool: Skipping optional fixer: set_literal RefactoringTool: Skipping optional fixer: 
ws_comma RefactoringTool: Refactored ../python3/nltk_contrib/nltk_contrib/classifier/knn.py RefactoringTool: Files that were modified: RefactoringTool: ../python3/nltk_contrib/nltk_contrib/classifier/knn.py --- ../python3/nltk_contrib/nltk_contrib/classifier/knn.py (original) +++ ../python3/nltk_contrib/nltk_contrib/classifier/knn.py (refactored) @@ -41,7 +41,7 @@ self.distances[value] = [instance] def minimum_distance_instances(self): - keys = self.distances.keys() + keys = list(self.distances.keys()) keys.sort() return self.distances[keys[0]] + for i in '$(find ../python3 -type f -name '\''*.py'\'' |grep -v '\''Tables\.py'\'')' + 2to3 -w -n ../python3/nltk_contrib/nltk_contrib/classifier/item.py RefactoringTool: Skipping optional fixer: buffer RefactoringTool: Skipping optional fixer: idioms RefactoringTool: Skipping optional fixer: set_literal RefactoringTool: Skipping optional fixer: ws_comma RefactoringTool: No changes to ../python3/nltk_contrib/nltk_contrib/classifier/item.py RefactoringTool: Files that need to be modified: RefactoringTool: ../python3/nltk_contrib/nltk_contrib/classifier/item.py + for i in '$(find ../python3 -type f -name '\''*.py'\'' |grep -v '\''Tables\.py'\'')' + 2to3 -w -n ../python3/nltk_contrib/nltk_contrib/classifier/instances.py RefactoringTool: Skipping optional fixer: buffer RefactoringTool: Skipping optional fixer: idioms RefactoringTool: Skipping optional fixer: set_literal RefactoringTool: Skipping optional fixer: ws_comma RefactoringTool: Refactored ../python3/nltk_contrib/nltk_contrib/classifier/instances.py RefactoringTool: Files that were modified: RefactoringTool: ../python3/nltk_contrib/nltk_contrib/classifier/instances.py --- ../python3/nltk_contrib/nltk_contrib/classifier/instances.py (original) +++ ../python3/nltk_contrib/nltk_contrib/classifier/instances.py (refactored) @@ -9,11 +9,11 @@ from nltk_contrib.classifier import instance as ins, item, cfile, confusionmatrix as cm, numrange as r, util from nltk_contrib.classifier.exceptions import systemerror as system, invaliddataerror as inv from nltk import probability as prob -import operator, UserList, UserDict, math - -class Instances(UserList.UserList): - def __init__(self, instances): - UserList.UserList.__init__(self, instances) +import operator, collections, UserDict, math + +class Instances(collections.UserList): + def __init__(self, instances): + collections.UserList.__init__(self, instances) def are_valid(self, klass, attributes): for instance in self.data: @@ -122,7 +122,7 @@ for klass_value in klass_values: freq_dists[attribute][value].inc(klass_value) #Laplacian smoothing stat_list_values = {} - cont_attrs = filter(lambda attr: attr.is_continuous(), attributes) + cont_attrs = [attr for attr in attributes if attr.is_continuous()] if attributes.has_continuous(): for attribute in cont_attrs: stat_list_values[attribute] = {} @@ -160,12 +160,12 @@ matrix.count(i.klass_value, i.classified_klass) return matrix -class SupervisedBreakpoints(UserList.UserList): +class SupervisedBreakpoints(collections.UserList): """ Used to find breakpoints for discretisation """ def __init__(self, klass_values, attr_values): - UserList.UserList.__init__(self, []) + collections.UserList.__init__(self, []) self.attr_values = attr_values self.klass_values = klass_values + for i in '$(find ../python3 -type f -name '\''*.py'\'' |grep -v '\''Tables\.py'\'')' + 2to3 -w -n ../python3/nltk_contrib/nltk_contrib/classifier/instance.py RefactoringTool: Skipping optional fixer: buffer RefactoringTool: Skipping optional 
fixer: idioms RefactoringTool: Skipping optional fixer: set_literal RefactoringTool: Skipping optional fixer: ws_comma RefactoringTool: No changes to ../python3/nltk_contrib/nltk_contrib/classifier/instance.py RefactoringTool: Files that need to be modified: RefactoringTool: ../python3/nltk_contrib/nltk_contrib/classifier/instance.py RefactoringTool: Warnings/messages while refactoring: RefactoringTool: ### In file ../python3/nltk_contrib/nltk_contrib/classifier/instance.py ### RefactoringTool: Line 13: absolute and local imports together + for i in '$(find ../python3 -type f -name '\''*.py'\'' |grep -v '\''Tables\.py'\'')' + 2to3 -w -n ../python3/nltk_contrib/nltk_contrib/classifier/format.py RefactoringTool: Skipping optional fixer: buffer RefactoringTool: Skipping optional fixer: idioms RefactoringTool: Skipping optional fixer: set_literal RefactoringTool: Skipping optional fixer: ws_comma RefactoringTool: No changes to ../python3/nltk_contrib/nltk_contrib/classifier/format.py RefactoringTool: Files that need to be modified: RefactoringTool: ../python3/nltk_contrib/nltk_contrib/classifier/format.py + for i in '$(find ../python3 -type f -name '\''*.py'\'' |grep -v '\''Tables\.py'\'')' + 2to3 -w -n ../python3/nltk_contrib/nltk_contrib/classifier/featureselect.py RefactoringTool: Skipping optional fixer: buffer RefactoringTool: Skipping optional fixer: idioms RefactoringTool: Skipping optional fixer: set_literal RefactoringTool: Skipping optional fixer: ws_comma RefactoringTool: Refactored ../python3/nltk_contrib/nltk_contrib/classifier/featureselect.py RefactoringTool: Files that were modified: RefactoringTool: ../python3/nltk_contrib/nltk_contrib/classifier/featureselect.py --- ../python3/nltk_contrib/nltk_contrib/classifier/featureselect.py (original) +++ ../python3/nltk_contrib/nltk_contrib/classifier/featureselect.py (refactored) @@ -58,7 +58,7 @@ class FeatureSelect(cl.CommandLineInterface): def __init__(self): - cl.CommandLineInterface.__init__(self, ALGORITHM_MAPPINGS.keys(), RANK, a_help, f_help, t_help, T_help, g_help, o_help) + cl.CommandLineInterface.__init__(self, list(ALGORITHM_MAPPINGS.keys()), RANK, a_help, f_help, t_help, T_help, g_help, o_help) def execute(self): cl.CommandLineInterface.execute(self) @@ -221,7 +221,7 @@ try: float(stringval) return True - except (ValueError, TypeError), e: return False + except (ValueError, TypeError) as e: return False def batch_filter_select(base_path, suffixes, number_of_attributes, log_path, has_continuous): filter_suffixes = [] @@ -229,7 +229,7 @@ for selection_criteria in [INFORMATION_GAIN, GAIN_RATIO]: feat_sel = FeatureSelect() params = ['-a', RANK, '-f', base_path + each, '-o', selection_criteria + ',' + str(number_of_attributes), '-l', log_path] - print "Params " + str(params) + print("Params " + str(params)) feat_sel.run(params) filter_suffixes.append(each + feat_sel.get_suffix()) return filter_suffixes @@ -240,7 +240,7 @@ for alg in [FORWARD_SELECTION, BACKWARD_ELIMINATION]: feat_sel = FeatureSelect() params = ['-a', alg, '-f', base_path + each, '-o', classifier + ',' + str(fold) + ',' + str(delta), '-l', log_path] - print "Params " + str(params) + print("Params " + str(params)) feat_sel.run(params) wrapper_suffixes.append(each + feat_sel.get_suffix()) return wrapper_suffixes + for i in '$(find ../python3 -type f -name '\''*.py'\'' |grep -v '\''Tables\.py'\'')' + 2to3 -w -n ../python3/nltk_contrib/nltk_contrib/classifier/exceptions/systemerror.py RefactoringTool: Skipping optional fixer: buffer RefactoringTool: Skipping 
optional fixer: idioms RefactoringTool: Skipping optional fixer: set_literal RefactoringTool: Skipping optional fixer: ws_comma RefactoringTool: No files need to be modified. + for i in '$(find ../python3 -type f -name '\''*.py'\'' |grep -v '\''Tables\.py'\'')' + 2to3 -w -n ../python3/nltk_contrib/nltk_contrib/classifier/exceptions/invaliddataerror.py RefactoringTool: Skipping optional fixer: buffer RefactoringTool: Skipping optional fixer: idioms RefactoringTool: Skipping optional fixer: set_literal RefactoringTool: Skipping optional fixer: ws_comma RefactoringTool: No files need to be modified. + for i in '$(find ../python3 -type f -name '\''*.py'\'' |grep -v '\''Tables\.py'\'')' + 2to3 -w -n ../python3/nltk_contrib/nltk_contrib/classifier/exceptions/illegalstateerror.py RefactoringTool: Skipping optional fixer: buffer RefactoringTool: Skipping optional fixer: idioms RefactoringTool: Skipping optional fixer: set_literal RefactoringTool: Skipping optional fixer: ws_comma RefactoringTool: No files need to be modified. + for i in '$(find ../python3 -type f -name '\''*.py'\'' |grep -v '\''Tables\.py'\'')' + 2to3 -w -n ../python3/nltk_contrib/nltk_contrib/classifier/exceptions/filenotfounderror.py RefactoringTool: Skipping optional fixer: buffer RefactoringTool: Skipping optional fixer: idioms RefactoringTool: Skipping optional fixer: set_literal RefactoringTool: Skipping optional fixer: ws_comma RefactoringTool: No changes to ../python3/nltk_contrib/nltk_contrib/classifier/exceptions/filenotfounderror.py RefactoringTool: Files that need to be modified: RefactoringTool: ../python3/nltk_contrib/nltk_contrib/classifier/exceptions/filenotfounderror.py + for i in '$(find ../python3 -type f -name '\''*.py'\'' |grep -v '\''Tables\.py'\'')' + 2to3 -w -n ../python3/nltk_contrib/nltk_contrib/classifier/exceptions/__init__.py RefactoringTool: Skipping optional fixer: buffer RefactoringTool: Skipping optional fixer: idioms RefactoringTool: Skipping optional fixer: set_literal RefactoringTool: Skipping optional fixer: ws_comma RefactoringTool: No files need to be modified. 
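The featureselect.py hunk above also shows the except-clause fixer: except (ValueError, TypeError), e: is a syntax error on Python 3 and becomes except (ValueError, TypeError) as e:. Note that 2to3 rewrites only the clause itself; the e.message attribute asserted in the numrangetests.py hunk earlier is not an attribute of Python 3 exceptions, so that assertion will still break at runtime unless the package's own SystemError class sets message itself. A short sketch, with the builtin SystemError standing in for the package's exceptions module:

    # Python 2 spelling was "except SystemError, e:"; below is what 2to3 emits.
    try:
        raise SystemError('Splitting of range resulted in elements smaller than delta 1e-06.')
    except SystemError as e:
        # e.message no longer exists on builtin exceptions, and 2to3 does not fix it;
        # str(e) or e.args[0] carries the message instead.
        assert str(e) == e.args[0]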
+ for i in '$(find ../python3 -type f -name '\''*.py'\'' |grep -v '\''Tables\.py'\'')' + 2to3 -w -n ../python3/nltk_contrib/nltk_contrib/classifier/distancemetric.py RefactoringTool: Skipping optional fixer: buffer RefactoringTool: Skipping optional fixer: idioms RefactoringTool: Skipping optional fixer: set_literal RefactoringTool: Skipping optional fixer: ws_comma RefactoringTool: No changes to ../python3/nltk_contrib/nltk_contrib/classifier/distancemetric.py RefactoringTool: Files that need to be modified: RefactoringTool: ../python3/nltk_contrib/nltk_contrib/classifier/distancemetric.py + for i in '$(find ../python3 -type f -name '\''*.py'\'' |grep -v '\''Tables\.py'\'')' + 2to3 -w -n ../python3/nltk_contrib/nltk_contrib/classifier/discretisedattribute.py RefactoringTool: Skipping optional fixer: buffer RefactoringTool: Skipping optional fixer: idioms RefactoringTool: Skipping optional fixer: set_literal RefactoringTool: Skipping optional fixer: ws_comma RefactoringTool: Refactored ../python3/nltk_contrib/nltk_contrib/classifier/discretisedattribute.py RefactoringTool: Files that were modified: RefactoringTool: ../python3/nltk_contrib/nltk_contrib/classifier/discretisedattribute.py --- ../python3/nltk_contrib/nltk_contrib/classifier/discretisedattribute.py (original) +++ ../python3/nltk_contrib/nltk_contrib/classifier/discretisedattribute.py (refactored) @@ -14,7 +14,7 @@ self.values, klass_value = [], autoclass.FIRST for i in range(len(ranges)): self.values.append(klass_value.name) - klass_value = klass_value.next() + klass_value = next(klass_value) self.index = index self.type = attribute.DISCRETE self.ranges = ranges + for i in '$(find ../python3 -type f -name '\''*.py'\'' |grep -v '\''Tables\.py'\'')' + 2to3 -w -n ../python3/nltk_contrib/nltk_contrib/classifier/discretise.py RefactoringTool: Skipping optional fixer: buffer RefactoringTool: Skipping optional fixer: idioms RefactoringTool: Skipping optional fixer: set_literal RefactoringTool: Skipping optional fixer: ws_comma RefactoringTool: Refactored ../python3/nltk_contrib/nltk_contrib/classifier/discretise.py RefactoringTool: Files that were modified: RefactoringTool: ../python3/nltk_contrib/nltk_contrib/classifier/discretise.py --- ../python3/nltk_contrib/nltk_contrib/classifier/discretise.py (original) +++ ../python3/nltk_contrib/nltk_contrib/classifier/discretise.py (refactored) @@ -52,7 +52,7 @@ class Discretise(cl.CommandLineInterface): def __init__(self): - cl.CommandLineInterface.__init__(self, ALGORITHM_MAPPINGS.keys(), UNSUPERVISED_EQUAL_WIDTH, a_help, f_help, t_help, T_help, g_help, o_help) + cl.CommandLineInterface.__init__(self, list(ALGORITHM_MAPPINGS.keys()), UNSUPERVISED_EQUAL_WIDTH, a_help, f_help, t_help, T_help, g_help, o_help) self.add_option("-A", "--attributes", dest="attributes", type="string", help=A_help) def execute(self): @@ -185,7 +185,7 @@ params.extend(['-o', options]) if log_path is not None: params.extend(['-l', log_path]) - print "Params " + str(params) + print("Params " + str(params)) disc.run(params) return disc.get_suffix() + for i in '$(find ../python3 -type f -name '\''*.py'\'' |grep -v '\''Tables\.py'\'')' + 2to3 -w -n ../python3/nltk_contrib/nltk_contrib/classifier/decisiontree.py RefactoringTool: Skipping optional fixer: buffer RefactoringTool: Skipping optional fixer: idioms RefactoringTool: Skipping optional fixer: set_literal RefactoringTool: Skipping optional fixer: ws_comma RefactoringTool: No changes to ../python3/nltk_contrib/nltk_contrib/classifier/decisiontree.py RefactoringTool: 
Files that need to be modified: RefactoringTool: ../python3/nltk_contrib/nltk_contrib/classifier/decisiontree.py + for i in '$(find ../python3 -type f -name '\''*.py'\'' |grep -v '\''Tables\.py'\'')' + 2to3 -w -n ../python3/nltk_contrib/nltk_contrib/classifier/decisionstump.py RefactoringTool: Skipping optional fixer: buffer RefactoringTool: Skipping optional fixer: idioms RefactoringTool: Skipping optional fixer: set_literal RefactoringTool: Skipping optional fixer: ws_comma RefactoringTool: Refactored ../python3/nltk_contrib/nltk_contrib/classifier/decisionstump.py RefactoringTool: Files that were modified: RefactoringTool: ../python3/nltk_contrib/nltk_contrib/classifier/decisionstump.py --- ../python3/nltk_contrib/nltk_contrib/classifier/decisionstump.py (original) +++ ../python3/nltk_contrib/nltk_contrib/classifier/decisionstump.py (refactored) @@ -37,10 +37,10 @@ self.root[instance.klass_value] += 1 def error(self): - count_for_each_attr_value = self.counts.values() + count_for_each_attr_value = list(self.counts.values()) total, errors = 0, 0 for class_count in count_for_each_attr_value: - subtotal, counts = 0, class_count.values() + subtotal, counts = 0, list(class_count.values()) counts.sort() for count in counts: subtotal += count errors += (subtotal - counts[-1]) @@ -56,7 +56,7 @@ def majority_klass(self, attr_value): klass_values_with_count = self.counts[attr_value] _max, klass_value = 0, self.safe_default() # will consider safe default because at times the test will have an attribute value not present in the stump(can happen in cross validation as well) - for klass, count in klass_values_with_count.items(): + for klass, count in list(klass_values_with_count.items()): if count > _max: _max, klass_value = count, klass return klass_value @@ -67,7 +67,7 @@ """ if self.__safe_default == None: max_occurance, klass = -1, None - for klass_element in self.root.keys(): + for klass_element in list(self.root.keys()): if self.root[klass_element] > max_occurance: max_occurance = self.root[klass_element] klass = klass_element @@ -110,14 +110,14 @@ def __str__(self): _str = 'Decision stump for attribute ' + self.attribute.name - for key, value in self.counts.items(): + for key, value in list(self.counts.items()): _str += '\nAttr value: ' + key + '; counts: ' + value.__str__() for child in self.children: _str += child.__str__() return _str def total_counts(dictionary_of_klass_freq): - return sum([count for count in dictionary_of_klass_freq.values()]) + return sum([count for count in list(dictionary_of_klass_freq.values())]) def dictionary_of_values(klass): return dict([(value, 0) for value in klass]) + for i in '$(find ../python3 -type f -name '\''*.py'\'' |grep -v '\''Tables\.py'\'')' + 2to3 -w -n ../python3/nltk_contrib/nltk_contrib/classifier/confusionmatrix.py RefactoringTool: Skipping optional fixer: buffer RefactoringTool: Skipping optional fixer: idioms RefactoringTool: Skipping optional fixer: set_literal RefactoringTool: Skipping optional fixer: ws_comma RefactoringTool: No changes to ../python3/nltk_contrib/nltk_contrib/classifier/confusionmatrix.py RefactoringTool: Files that need to be modified: RefactoringTool: ../python3/nltk_contrib/nltk_contrib/classifier/confusionmatrix.py + for i in '$(find ../python3 -type f -name '\''*.py'\'' |grep -v '\''Tables\.py'\'')' + 2to3 -w -n ../python3/nltk_contrib/nltk_contrib/classifier/commandline.py RefactoringTool: Skipping optional fixer: buffer RefactoringTool: Skipping optional fixer: idioms RefactoringTool: Skipping optional fixer: 
set_literal RefactoringTool: Skipping optional fixer: ws_comma RefactoringTool: Refactored ../python3/nltk_contrib/nltk_contrib/classifier/commandline.py RefactoringTool: Files that were modified: RefactoringTool: ../python3/nltk_contrib/nltk_contrib/classifier/commandline.py --- ../python3/nltk_contrib/nltk_contrib/classifier/commandline.py (original) +++ ../python3/nltk_contrib/nltk_contrib/classifier/commandline.py (refactored) @@ -38,7 +38,7 @@ self.add_option("-T", "--test-file", dest=TEST, type="string", help=T_help) self.add_option("-g", "--gold-file", dest=GOLD, type="string", help=g_help) - self.add_option("-D", "--data-format", dest=DATA_FORMAT, type="choice", choices=DATA_FORMAT_MAPPINGS.keys(), \ + self.add_option("-D", "--data-format", dest=DATA_FORMAT, type="choice", choices=list(DATA_FORMAT_MAPPINGS.keys()), \ default=C45_FORMAT, help=D_help) self.add_option("-l", "--log-file", dest=LOG_FILE, type="string", help=l_help) self.add_option("-o", "--options", dest=OPTIONS, type="string", help=o_help) @@ -67,8 +67,8 @@ self.log = None if log_file is not None: self.log = open(log_file, 'a') - print >>self.log, '-' * 40 - print >>self.log, 'DateTime: ' + time.strftime('%c', time.localtime()) + print('-' * 40, file=self.log) + print('DateTime: ' + time.strftime('%c', time.localtime()), file=self.log) def run(self, args): """ @@ -117,22 +117,22 @@ def log_common_params(self, name): if self.log is not None: - print >>self.log, 'Operation: ' + name - print >>self.log, '\nAlgorithm: ' + str(self.algorithm) + '\nTraining: ' + str(self.training_path) + \ - '\nTest: ' + str(self.test_path) + '\nGold: ' + str(self.gold_path) + '\nOptions: ' + str(self.options) + print('Operation: ' + name, file=self.log) + print('\nAlgorithm: ' + str(self.algorithm) + '\nTraining: ' + str(self.training_path) + \ + '\nTest: ' + str(self.test_path) + '\nGold: ' + str(self.gold_path) + '\nOptions: ' + str(self.options), file=self.log) def log_created_files(self, files_names, message): if self.log is None: - print message + print(message) else: - print >>self.log, "NumberOfFilesCreated: " + str(len(files_names)) + print("NumberOfFilesCreated: " + str(len(files_names)), file=self.log) count = 0 for file_name in files_names: if self.log is None: - print file_name + print(file_name) else: - print >>self.log, "CreatedFile" + str(count) + ": " + file_name + print("CreatedFile" + str(count) + ": " + file_name, file=self.log) count += 1 + for i in '$(find ../python3 -type f -name '\''*.py'\'' |grep -v '\''Tables\.py'\'')' + 2to3 -w -n ../python3/nltk_contrib/nltk_contrib/classifier/classify.py RefactoringTool: Skipping optional fixer: buffer RefactoringTool: Skipping optional fixer: idioms RefactoringTool: Skipping optional fixer: set_literal RefactoringTool: Skipping optional fixer: ws_comma RefactoringTool: Refactored ../python3/nltk_contrib/nltk_contrib/classifier/classify.py RefactoringTool: Files that were modified: RefactoringTool: ../python3/nltk_contrib/nltk_contrib/classifier/classify.py --- ../python3/nltk_contrib/nltk_contrib/classifier/classify.py (original) +++ ../python3/nltk_contrib/nltk_contrib/classifier/classify.py (refactored) @@ -67,7 +67,7 @@ IB1 = 'IB1' ALGORITHM_MAPPINGS = {ZERO_R:zeror.ZeroR, ONE_R:oner.OneR, DECISION_TREE:decisiontree.DecisionTree, NAIVE_BAYES:naivebayes.NaiveBayes, IB1:knn.IB1} -ALL_ALGORITHMS = ALGORITHM_MAPPINGS.keys() +ALL_ALGORITHMS = list(ALGORITHM_MAPPINGS.keys()) VERIFY='verify' ACCURACY='accuracy' @@ -80,7 +80,7 @@ class Classify(cl.CommandLineInterface): def 
__init__(self): - cl.CommandLineInterface.__init__(self, ALGORITHM_MAPPINGS.keys(), ONE_R, a_help, f_help, t_help, T_help, g_help, o_help) + cl.CommandLineInterface.__init__(self, list(ALGORITHM_MAPPINGS.keys()), ONE_R, a_help, f_help, t_help, T_help, g_help, o_help) self.add_option("-v", "--verify", dest=VERIFY, action="store_true", default=False, help=v_help) self.add_option("-A", "--accuracy", dest=ACCURACY, action="store_false", default=True, help=A_help) self.add_option("-e", "--error", dest=ERROR, action="store_true", default=False, help=e_help) @@ -103,7 +103,7 @@ self.error('Invalid arguments. Cannot verify classification for test data.') file_strategy = get_file_strategy(self.files, self.training_path, self.test_path, self.gold_path, self.get_value(VERIFY)) - self.training_path, self.test_path, self.gold_path = file_strategy.values() + self.training_path, self.test_path, self.gold_path = list(file_strategy.values()) training, attributes, klass, test, gold = self.get_instances(self.training_path, self.test_path, self.gold_path, cross_validation_fold is not None) classifier = ALGORITHM_MAPPINGS[self.algorithm](training, attributes, klass) @@ -165,14 +165,14 @@ total = 0 for each in self.confusion_matrices: total += getattr(each, attribute)() - print >>log, str_repn + ': ' + str(float(total)/len(self.confusion_matrices)) + print(str_repn + ': ' + str(float(total)/len(self.confusion_matrices)), file=log) def write(self, log, should_write, data_format, suffix): if should_write: for index in range(len(self.gold_instances)): new_path = self.training_path + str(index + 1) + suffix data_format.write_gold(self.gold_instances[index], new_path) - print >>log, 'Gold classification written to ' + new_path + ' file.' + print('Gold classification written to ' + new_path + ' file.', file=log) def train(self): #do Nothing @@ -198,7 +198,7 @@ Will always write in the case of test files """ data_format.write_test(self.test, self.test_path + suffix) - print >>log, 'Test classification written to ' + self.test_path + suffix + ' file.' + print('Test classification written to ' + self.test_path + suffix + ' file.', file=log) def train(self): self.classifier.train() @@ -223,12 +223,12 @@ def __print_value(self, log, is_true, attribute, str_repn): if is_true: - print >>log, str_repn + ': ' + getattr(self.confusion_matrix, attribute)().__str__() + print(str_repn + ': ' + getattr(self.confusion_matrix, attribute)().__str__(), file=log) def write(self, log, should_write, data_format, suffix): if should_write: data_format.write_gold(self.gold, self.gold_path + suffix) - print >>log, 'Gold classification written to ' + self.gold_path + suffix + ' file.' 
+ print('Gold classification written to ' + self.gold_path + suffix + ' file.', file=log) def train(self): self.classifier.train() + for i in '$(find ../python3 -type f -name '\''*.py'\'' |grep -v '\''Tables\.py'\'')' + 2to3 -w -n ../python3/nltk_contrib/nltk_contrib/classifier/cfile.py RefactoringTool: Skipping optional fixer: buffer RefactoringTool: Skipping optional fixer: idioms RefactoringTool: Skipping optional fixer: set_literal RefactoringTool: Skipping optional fixer: ws_comma RefactoringTool: No changes to ../python3/nltk_contrib/nltk_contrib/classifier/cfile.py RefactoringTool: Files that need to be modified: RefactoringTool: ../python3/nltk_contrib/nltk_contrib/classifier/cfile.py + for i in '$(find ../python3 -type f -name '\''*.py'\'' |grep -v '\''Tables\.py'\'')' + 2to3 -w -n ../python3/nltk_contrib/nltk_contrib/classifier/basicimports.py RefactoringTool: Skipping optional fixer: buffer RefactoringTool: Skipping optional fixer: idioms RefactoringTool: Skipping optional fixer: set_literal RefactoringTool: Skipping optional fixer: ws_comma RefactoringTool: No files need to be modified. + for i in '$(find ../python3 -type f -name '\''*.py'\'' |grep -v '\''Tables\.py'\'')' + 2to3 -w -n ../python3/nltk_contrib/nltk_contrib/classifier/autoclass.py RefactoringTool: Skipping optional fixer: buffer RefactoringTool: Skipping optional fixer: idioms RefactoringTool: Skipping optional fixer: set_literal RefactoringTool: Skipping optional fixer: ws_comma RefactoringTool: Refactored ../python3/nltk_contrib/nltk_contrib/classifier/autoclass.py RefactoringTool: Files that were modified: RefactoringTool: ../python3/nltk_contrib/nltk_contrib/classifier/autoclass.py --- ../python3/nltk_contrib/nltk_contrib/classifier/autoclass.py (original) +++ ../python3/nltk_contrib/nltk_contrib/classifier/autoclass.py (refactored) @@ -10,7 +10,7 @@ def __init__(self, name): self.name = name - def next(self): + def __next__(self): base26 = self.base26() base26 += 1 return AutoClass(string(base26)) + for i in '$(find ../python3 -type f -name '\''*.py'\'' |grep -v '\''Tables\.py'\'')' + 2to3 -w -n ../python3/nltk_contrib/nltk_contrib/classifier/attribute.py RefactoringTool: Skipping optional fixer: buffer RefactoringTool: Skipping optional fixer: idioms RefactoringTool: Skipping optional fixer: set_literal RefactoringTool: Skipping optional fixer: ws_comma RefactoringTool: Refactored ../python3/nltk_contrib/nltk_contrib/classifier/attribute.py RefactoringTool: Files that were modified: RefactoringTool: ../python3/nltk_contrib/nltk_contrib/classifier/attribute.py --- ../python3/nltk_contrib/nltk_contrib/classifier/attribute.py (original) +++ ../python3/nltk_contrib/nltk_contrib/classifier/attribute.py (refactored) @@ -8,7 +8,7 @@ from nltk_contrib.classifier.exceptions import systemerror as se from nltk_contrib.classifier import autoclass as ac, cfile, decisionstump as ds from nltk import probability as prob -import UserList +import collections CONTINUOUS = 'continuous' DISCRETE = 'discrete' @@ -58,7 +58,7 @@ def __hash__(self): return hash(self.name) + hash(self.index) -class Attributes(UserList.UserList): +class Attributes(collections.UserList): def __init__(self, attributes = []): self.data = attributes @@ -84,7 +84,7 @@ self.data[disc_attr.index] = disc_attr def empty_decision_stumps(self, ignore_attributes, klass): - filtered = filter(lambda attribute: attribute not in ignore_attributes, self.data) + filtered = [attribute for attribute in self.data if attribute not in ignore_attributes] return 
[ds.DecisionStump(attribute, klass) for attribute in filtered] def remove_attributes(self, attributes): + for i in '$(find ../python3 -type f -name '\''*.py'\'' |grep -v '\''Tables\.py'\'')' + 2to3 -w -n ../python3/nltk_contrib/nltk_contrib/classifier/__init__.py RefactoringTool: Skipping optional fixer: buffer RefactoringTool: Skipping optional fixer: idioms RefactoringTool: Skipping optional fixer: set_literal RefactoringTool: Skipping optional fixer: ws_comma RefactoringTool: Refactored ../python3/nltk_contrib/nltk_contrib/classifier/__init__.py RefactoringTool: Files that were modified: RefactoringTool: ../python3/nltk_contrib/nltk_contrib/classifier/__init__.py --- ../python3/nltk_contrib/nltk_contrib/classifier/__init__.py (original) +++ ../python3/nltk_contrib/nltk_contrib/classifier/__init__.py (refactored) @@ -19,7 +19,7 @@ self.attributes = attributes self.training = training self.convert_continuous_values_to_numbers(self.training) - sorted_klass_freqs = self.training.class_freq_dist().keys() + sorted_klass_freqs = list(self.training.class_freq_dist().keys()) sorted_klass_values = [each for each in sorted_klass_freqs] sorted_klass_values.extend([each for each in klass if not sorted_klass_values.__contains__(each)]) self.klass = sorted_klass_values @@ -86,7 +86,7 @@ def entropy_of_key_counts(dictionary): freq_dist = prob.FreqDist() - klasses = dictionary.keys() + klasses = list(dictionary.keys()) for klass in klasses: freq_dist.inc(klass, dictionary[klass]) return entropy_of_freq_dist(freq_dist) + for i in '$(find ../python3 -type f -name '\''*.py'\'' |grep -v '\''Tables\.py'\'')' + 2to3 -w -n ../python3/nltk_contrib/nltk_contrib/bioreader/bioreader.py RefactoringTool: Skipping optional fixer: buffer RefactoringTool: Skipping optional fixer: idioms RefactoringTool: Skipping optional fixer: set_literal RefactoringTool: Skipping optional fixer: ws_comma RefactoringTool: Refactored ../python3/nltk_contrib/nltk_contrib/bioreader/bioreader.py --- ../python3/nltk_contrib/nltk_contrib/bioreader/bioreader.py (original) +++ ../python3/nltk_contrib/nltk_contrib/bioreader/bioreader.py (refactored) @@ -283,18 +283,18 @@ elif format.lower() == "pubmed": self.rerecord = re.compile(r'\'r'(?P.+?)'r'\',re.DOTALL) else: - print "Unrecognized format" + print("Unrecognized format") self.RecordsList = re.findall(self.rerecord,whole) whole = "" self.RecordsList = [""+x.rstrip()+"" for x in self.RecordsList] self.dictRecords = self.Createdict() self.RecordsList = [] - self.howmany = len(self.dictRecords.keys()) - self.keys = self.dictRecords.keys() + self.howmany = len(list(self.dictRecords.keys())) + self.keys = list(self.dictRecords.keys()) tfinal = time.time() self.repository = None - print "finished loading at ",time.ctime(tfinal) - print "loaded in", tfinal-tinicial," seconds, or",((tfinal-tinicial)/60)," minutes" + print("finished loading at ",time.ctime(tfinal)) + print("loaded in", tfinal-tinicial," seconds, or",((tfinal-tinicial)/60)," minutes") def __repr__(self): return "" @@ -355,7 +355,7 @@ tinicial = time.time() resultlist = [] if where: - for cadapmid in self.dictRecords.keys(): + for cadapmid in list(self.dictRecords.keys()): d = self.Read(cadapmid) if where == 'title': tosearch = d.title @@ -374,7 +374,7 @@ if self.repository: pass else: - print "No full text repository has been defined...." 
+ print("No full text repository has been defined....") return None elif where == 'pmid': tosearch = d.pmid @@ -385,16 +385,16 @@ pass if len(resultlist)!= 0: tfinal = time.time() - print "Searched in", tfinal-tinicial," seconds, or",((tfinal-tinicial)/60)," minutes" - print "Found a total of ",str(len(resultlist))," hits for your query, in the ",where," field" + print("Searched in", tfinal-tinicial," seconds, or",((tfinal-tinicial)/60)," minutes") + print("Found a total of ",str(len(resultlist))," hits for your query, in the ",where," field") return resultlist else: - print "Searched in", tfinal-tinicial," seconds, or",((tfinal-tinicial)/60)," minutes" - print "Query not found" + print("Searched in", tfinal-tinicial," seconds, or",((tfinal-tinicial)/60)," minutes") + print("Query not found") return None else: tosearch = '' - for cadapmid in self.dictRecords.keys(): + for cadapmid in list(self.dictRecords.keys()): tosearch = self.dictRecords[cadapmid] hit = re.search(cadena,tosearch) if hit: @@ -403,13 +403,13 @@ pass if len(resultlist)!= 0: tfinal = time.time() - print "Searched in", tfinal-tinicial," seconds, or",((tfinal-tinicial)/60)," minutes" - print "Found a total of ",str(len(resultlist))," hits for your query, in all fields" + print("Searched in", tfinal-tinicial," seconds, or",((tfinal-tinicial)/60)," minutes") + print("Found a total of ",str(len(resultlist))," hits for your query, in all fields") return resultlist else: tfinal = time.time() - print "Searched in", tfinal-tinicial," seconds, or",((tfinal-tinicial)/60)," minutes" - print "Query not found" + print("SearRefactoringTool: Files that were modified: RefactoringTool: ../python3/nltk_contrib/nltk_contrib/bioreader/bioreader.py ched in", tfinal-tinicial," seconds, or",((tfinal-tinicial)/60)," minutes") + print("Query not found") return None @@ -432,15 +432,15 @@ """ def __init__(self): #global urllib,time,string,random - import urllib,time,string,random + import urllib.request, urllib.parse, urllib.error,time,string,random def getXml(self,s): - pedir = urllib.urlopen("http://eutils.ncbi.nlm.nih.gov/entrez/eutils/efetch.fcgi?db=pubmed&id="+s+"&retmode=xml") + pedir = urllib.request.urlopen("http://eutils.ncbi.nlm.nih.gov/entrez/eutils/efetch.fcgi?db=pubmed&id="+s+"&retmode=xml") stringxml = pedir.read() self.salida.write(stringxml[:-20]+"\n") def getXmlString(self,s): - pedir = urllib.urlopen("http://eutils.ncbi.nlm.nih.gov/entrez/eutils/efetch.fcgi?db=pubmed&id="+s+"&retmode=xml") + pedir = urllib.request.urlopen("http://eutils.ncbi.nlm.nih.gov/entrez/eutils/efetch.fcgi?db=pubmed&id="+s+"&retmode=xml") stringxml = pedir.read() return stringxml[:-20]+"\n" @@ -463,7 +463,7 @@ cientos = self.listafin[:100] - print "new length self.listacorr", len(self.listafin) + print("new length self.listacorr", len(self.listafin)) if len(self.listafin) <= 0: break else: @@ -471,7 +471,7 @@ nueva = self.listastring(cientos) self.getXml(nueva) for c in cientos: - print c + print(c) self.listafin.remove(c) self.salida.close() @@ -489,7 +489,7 @@ cientos = self.listafin[:100] - print "new length self.listacorr", len(self.listafin) + print("new length self.listacorr", len(self.listafin)) if len(self.listafin) <= 0: break else: @@ -498,6 +498,6 @@ newX = self.getXmlString(nueva) self.AllXML = self.AllXML + newX for c in cientos: - print c + print(c) self.listafin.remove(c) return self.AllXML + for i in '$(find ../python3 -type f -name '\''*.py'\'' |grep -v '\''Tables\.py'\'')' + 2to3 -w -n 
../python3/nltk_contrib/nltk_contrib/bioreader/__init__.py RefactoringTool: Skipping optional fixer: buffer RefactoringTool: Skipping optional fixer: idioms RefactoringTool: Skipping optional fixer: set_literal RefactoringTool: Skipping optional fixer: ws_comma RefactoringTool: Refactored ../python3/nltk_contrib/nltk_contrib/bioreader/__init__.py RefactoringTool: Files that were modified: RefactoringTool: ../python3/nltk_contrib/nltk_contrib/bioreader/__init__.py --- ../python3/nltk_contrib/nltk_contrib/bioreader/__init__.py (original) +++ ../python3/nltk_contrib/nltk_contrib/bioreader/__init__.py (refactored) @@ -7,7 +7,7 @@ # For license information, see LICENSE.TXT # -from bioreader import * +from .bioreader import * __all__ = [ 'Reader', + for i in '$(find ../python3 -type f -name '\''*.py'\'' |grep -v '\''Tables\.py'\'')' + 2to3 -w -n ../python3/nltk_contrib/nltk_contrib/align/util.py RefactoringTool: Skipping optional fixer: buffer RefactoringTool: Skipping optional fixer: idioms RefactoringTool: Skipping optional fixer: set_literal RefactoringTool: Skipping optional fixer: ws_comma RefactoringTool: No files need to be modified. + for i in '$(find ../python3 -type f -name '\''*.py'\'' |grep -v '\''Tables\.py'\'')' + 2to3 -w -n ../python3/nltk_contrib/nltk_contrib/align/test.py RefactoringTool: Skipping optional fixer: buffer RefactoringTool: Skipping optional fixer: idioms RefactoringTool: Skipping optional fixer: set_literal RefactoringTool: Skipping optional fixer: ws_comma RefactoringTool: Refactored ../python3/nltk_contrib/nltk_contrib/align/test.py RefactoringTool: Files that were modified: RefactoringTool: ../python3/nltk_contrib/nltk_contrib/align/test.py --- ../python3/nltk_contrib/nltk_contrib/align/test.py (original) +++ ../python3/nltk_contrib/nltk_contrib/align/test.py (refactored) @@ -1,7 +1,7 @@ -import align_util -import align -import distance_measures +from . import align_util +from . import align +from . 
import distance_measures import sys @@ -56,7 +56,7 @@ gc_alignment = gc.batch_align(regions1, regions2) - print "Alignment0: %s" % gc_alignment + print("Alignment0: %s" % gc_alignment) #demo_eval(gc_alignment, gold_file) @@ -78,7 +78,7 @@ gc_alignment = gc.batch_align(regions1, regions2) - print "Alignment1: %s" % gc_alignment + print("Alignment1: %s" % gc_alignment) #demo_eval(gc_alignment, gold_file) @@ -97,7 +97,7 @@ standard_alignment2 = std.batch_align(s2, t2) - print "Alignment2: %s" % standard_alignment2 + print("Alignment2: %s" % standard_alignment2) # demo 4 @@ -109,14 +109,14 @@ standard_alignment3 = std.align(s3, t3) - print "Alignment3: %s" % standard_alignment3 + print("Alignment3: %s" % standard_alignment3) # demo 5 top_down_alignments = std.recursive_align(s3, t3, []) for alignment in top_down_alignments: - print "Top down align: %s" % alignment + print("Top down align: %s" % alignment) def madame_bovary_test(source_file, target_file, source_pickle_file, target_pickle_file): + for i in '$(find ../python3 -type f -name '\''*.py'\'' |grep -v '\''Tables\.py'\'')' + 2to3 -w -n ../python3/nltk_contrib/nltk_contrib/align/gale_church.py RefactoringTool: Skipping optional fixer: buffer RefactoringTool: Skipping optional fixer: idioms RefactoringTool: Skipping optional fixer: set_literal RefactoringTool: Skipping optional fixer: ws_comma RefactoringTool: Refactored ../python3/nltk_contrib/nltk_contrib/align/gale_church.py RefactoringTool: Files that were modified: RefactoringTool: ../python3/nltk_contrib/nltk_contrib/align/gale_church.py --- ../python3/nltk_contrib/nltk_contrib/align/gale_church.py (original) +++ ../python3/nltk_contrib/nltk_contrib/align/gale_church.py (refactored) @@ -7,10 +7,10 @@ # URL: # For license information, see LICENSE.TXT -from __future__ import division + import math -from util import * +from .util import * # Based on Gale & Church 1993, # "A Program for Aligning Sentences in Bilingual Corpora" @@ -182,10 +182,10 @@ v = first while v != split_value: yield v - v = it.next() + v = next(it) while True: - yield _chunk_iterator(it.next()) + yield _chunk_iterator(next(it)) def parse_token_stream(stream, soft_delimiter, hard_delimiter): @@ -205,4 +205,4 @@ with nested(open(sys.argv[1], "r"), open(sys.argv[2], "r")) as (s, t): source = parse_token_stream((l.strip() for l in s), ".EOS", ".EOP") target = parse_token_stream((l.strip() for l in t), ".EOS", ".EOP") - print align_texts(source, target) + print(align_texts(source, target)) + for i in '$(find ../python3 -type f -name '\''*.py'\'' |grep -v '\''Tables\.py'\'')' + 2to3 -w -n ../python3/nltk_contrib/nltk_contrib/align/distance_measures.py RefactoringTool: Skipping optional fixer: buffer RefactoringTool: Skipping optional fixer: idioms RefactoringTool: Skipping optional fixer: set_literal RefactoringTool: Skipping optional fixer: ws_comma RefactoringTool: No changes to ../python3/nltk_contrib/nltk_contrib/align/distance_measures.py RefactoringTool: Files that need to be modified: RefactoringTool: ../python3/nltk_contrib/nltk_contrib/align/distance_measures.py + for i in '$(find ../python3 -type f -name '\''*.py'\'' |grep -v '\''Tables\.py'\'')' + 2to3 -w -n ../python3/nltk_contrib/nltk_contrib/align/api.py RefactoringTool: Skipping optional fixer: buffer RefactoringTool: Skipping optional fixer: idioms RefactoringTool: Skipping optional fixer: set_literal RefactoringTool: Skipping optional fixer: ws_comma RefactoringTool: Refactored ../python3/nltk_contrib/nltk_contrib/align/api.py RefactoringTool: Files that 
were modified: RefactoringTool: ../python3/nltk_contrib/nltk_contrib/align/api.py --- ../python3/nltk_contrib/nltk_contrib/align/api.py (original) +++ ../python3/nltk_contrib/nltk_contrib/align/api.py (refactored) @@ -7,7 +7,7 @@ """ from nltk.internals import deprecated, overridden -from itertools import izip + ##////////////////////////////////////////////////////// # Alignment Interfaces @@ -53,7 +53,7 @@ @rtype: C{list} of I{alignments} """ - return [self.align(st, tt) for (st, tt) in izip(source, target)] + return [self.align(st, tt) for (st, tt) in zip(source, target)] def recursive_align(self, source, target, alignments): """ @@ -70,7 +70,7 @@ if (self.output_format == 'text_tuples'): alignment_mapping = standard_alignment - import align_util + from . import align_util if (self.output_format == 'bead_objects'): (alignment_mapping, alignment_mapping_indices) = align_util.convert_bead_to_tuples(standard_alignment, source, target) + for i in '$(find ../python3 -type f -name '\''*.py'\'' |grep -v '\''Tables\.py'\'')' + 2to3 -w -n ../python3/nltk_contrib/nltk_contrib/align/alignment_util.py RefactoringTool: Skipping optional fixer: buffer RefactoringTool: Skipping optional fixer: idioms RefactoringTool: Skipping optional fixer: set_literal RefactoringTool: Skipping optional fixer: ws_comma RefactoringTool: Refactored ../python3/nltk_contrib/nltk_contrib/align/alignment_util.py RefactoringTool: Files that were modified: RefactoringTool: ../python3/nltk_contrib/nltk_contrib/align/alignment_util.py --- ../python3/nltk_contrib/nltk_contrib/align/alignment_util.py (original) +++ ../python3/nltk_contrib/nltk_contrib/align/alignment_util.py (refactored) @@ -41,9 +41,9 @@ """ test_values = [] - for hard_regions_index in alignments.keys(): + for hard_regions_index in list(alignments.keys()): soft_regions_list = [] - for soft_regions_index in alignments[hard_regions_index].keys(): + for soft_regions_index in list(alignments[hard_regions_index].keys()): soft_regions_list.extend(alignments[hard_regions_index][soft_regions_index].alignment_mappings) soft_regions_list.reverse() test_values.extend(soft_regions_list) + for i in '$(find ../python3 -type f -name '\''*.py'\'' |grep -v '\''Tables\.py'\'')' + 2to3 -w -n ../python3/nltk_contrib/nltk_contrib/align/align_util.py RefactoringTool: Skipping optional fixer: buffer RefactoringTool: Skipping optional fixer: idioms RefactoringTool: Skipping optional fixer: set_literal RefactoringTool: Skipping optional fixer: ws_comma RefactoringTool: Refactored ../python3/nltk_contrib/nltk_contrib/align/align_util.py --- ../python3/nltk_contrib/nltk_contrib/align/align_util.py (original) +++ ../python3/nltk_contrib/nltk_contrib/align/align_util.py (refactored) @@ -78,132 +78,132 @@ def print_alignment_text_mapping(alignment_mapping): entry_num = 0 for entry in alignment_mapping: - print "--------------------------------" - print "Entry: %d" % entry_num + print("--------------------------------") + print("Entry: %d" % entry_num) entry_num = entry_num + 1 - print "%s" % str(entry[0]) - print "%s" % str(entry[1]) + print("%s" % str(entry[0])) + print("%s" % str(entry[1])) def print_alignment_index_mapping(alignment_mapping_indices): entry_num = 0 for entry in alignment_mapping_indices: - print "--------------------------------" - print "Indices Entry: %d" % entry_num + print("--------------------------------") + print("Indices Entry: %d" % entry_num) entry_num = entry_num + 1 source = entry[0] target = entry[1] - print "%s" % str(source) - print "%s" % str(target) + 
print("%s" % str(source)) + print("%s" % str(target)) def print_alignments(alignments, hard_region1, hard_region2): hard1_key = 0 hard2_key = 0 - for soft_key in alignments.keys(): + for soft_key in list(alignments.keys()): alignment = alignments[soft_key] if (alignment.category == '1 - 1'): - print "1-1: %s" % alignment.d - print "--------------------------" - print "%s" % hard_region1[hard1_key] - print "%s" % hard_region2[hard2_key] - print "--------------------------" + print("1-1: %s" % alignment.d) + print("--------------------------") + print("%s" % hard_region1[hard1_key]) + print("%s" % hard_region2[hard2_key]) + print("--------------------------") hard1_key = hard1_key + 1 hard2_key = hard2_key + 1 elif (alignment.category == '1 - 0'): - print "1-0: %s" % alignment.d - print "--------------------------" - print "%s" % hard_region1[hard1_key] - print "--------------------------" + print("1-0: %s" % alignment.d) + print("--------------------------") + print("%s" % hard_region1[hard1_key]) + print("--------------------------") hard1_key = hard1_key + 1 elif (alignment.category == '0 - 1'): - print "0-1: %s" % alignment.d - print "--------------------------" - print "%s" % hard_region2[hard2_key] - print "--------------------------" + print("0-1: %s" % alignment.d) + print("--------------------------") + print("%s" % hard_region2[hard2_key]) + print("--------------------------") hard2_key = hard2_key + 1 elif (alignment.category == '2 - 1'): - print "2-1: %.2f" % alignment.d - print "--------------------------" - print "%s" % hard_region1[hard1_key] - print "%s" % hard_region1[hard1_key + 1] - print "%s" % hard_region2[hard2_key] - print "--------------------------" + print("2-1: %.2f" % alignment.d) + print("--------------------------") + print("%s" % hard_region1[hard1_key]) + print("%s" % hard_region1[hard1_key + 1]) + print("%s" % hard_region2[hard2_key]) + print("--------------------------") hard1_key = hard1_key + 2 hard2_key = hard2_key + 1 elif (alignment.category == '1 - 2'): - print "1-2: %.2f" % alignment.d - print "--------------------------" - print "%s" % hard_region1[hard1_key] - print "%s" % hard_region2[hard2_key] - print "%s" % hard_region2[hard2_key + 1] - print "--------------------------" + print("1-2: %.2f" % alignment.d) + print("--------------------------") + print("%s" % hard_region1[hard1_key]) + print("%s" % hard_region2[hard2_key]) + print("%s" % hard_region2[hard2_key + 1]) + print("--------------------------") hard1_key = hard1_key + 1 hard2_key = hard2_key + 2 elif (alignment.category == '2 - 2'): - print "2-2: %.2f" % alignment.d - print "--------------------------" - print "%s" % hard_region1[hard1_key] - print "%s" % hard_region1[hard1_key + 1] - print "%s" % hard_region2[hard2_key] - print "%s" % hard_region2[hard2_key + 1] - print "--------------------------" + print("2-2: %.2f" % alignment.d) + print("--------------------------") + print("%s" % hard_region1[hard1_key]) + print("%s" % hard_region1[hard1_key + 1]) + print("%s" % hard_region2[hard2_key]) + print("%s" % hard_region2[hard2_key + 1]) + print("--------------------------") hard1_key = hard1_key + 2 hard2_key = hard2_key + 2 elif (alignment.category == '3 - 1'): - print "3-1: %.2f" % alignment.d - print "--------------------------" - print "%s" % hard_region1[hard1_key] - print "%s" % hard_region1[hard1_key + 1] - print "%s" % hard_region1[hard1_key + 2] - print "%s" % hard_region2[hard2_key] - print "--------------------------" + print("3-1: %.2f" % alignment.d) + 
print("--------------------------") + print("%s" % hard_region1[hard1_key]) + print("%s" % hard_region1[hard1_key + 1]) + print("%s" % hard_region1[hard1_key + 2]) + print("%s" % hard_region2[hard2_key]) + print("--------------------------") hard1_key = hard1_key + 3 hard2_key = hard2_key + 1 elif (alignment.category == '3 - 2'): - print "3-2: %.2f" % alignment.d - print "--------------------------" - print "%s" % hard_region1[hard1_key] - print "%s" % hard_region1[hard1_key + 1] - print "%s" % hard_region1[hard1_key + 2] - print "%s" % hard_region2[hard2_key] - print "%s" % hard_region2[hard2_key + 1] - print "--------------------------" + print("3-2: %.2f" % alignment.d) + print("--------------------------") + print("%s" % hard_region1[hard1_key]) + print("%s" % hard_region1[hard1_key + 1]) + print("%s" % hard_region1[hard1_key + 2]) + print("%s" % hard_region2[hard2_key]) + print("%s" % hard_region2[hard2_key + 1]) + print("--------------------------") hard1_key = hard1_key + 3 hard2_key = hard2_key + 2 elif (alignment.category == '1 - 3'): - print "1-3: %.2f" % alignment.d - print "--------------------------" - print "%s" % hard_region1[hard1_key] - print "%s" % hard_region2[hard2_key] - print "%s" % hard_region2[hard2_key + 1] - print "%s" % hard_region2[hard2_key + 2] - print "--------------------------" + print("1-3: %.2f" % alignment.d) + print("--------------------------") + print("%s" % hard_region1[hard1_key]) + print("%s" % hard_region2[hard2_key]) + print("%s" % hard_region2[hard2_key + 1]) + print("%s" % hard_region2[hard2_key + 2]) + print("--------------------------") hard1_key = hard1_key + 1 hard2_key = hard2_key + 3 elif (alignment.category == '2 - 3'): - print "2-3: %.2f" % alignment.d - print "--------------------------" - print "%s" % hard_region1[hard1_key] - print "%s" % hard_region1[hard1_key + 1] - print "%s" % hard_region2[hard2_key] - print "%s" % hard_region2[hard2_key + 1] - print "%s" % hard_region2[hard2_key + 2] - print "--------------------------" + print("2-3: %.2f" % alignment.d) + print("--------------------------") + print("%s" % hard_region1[hard1_key]) + print("%s" % hard_region1[hard1_key + 1]) + print("%s" % hard_region2[hard2_key]) + print("%s" % hard_region2[hard2_key + 1]) + print("%s" % hard_region2[hard2_key + 2]) + print("--------------------------") hard1_key = hard1_key + 2 hard2_key = hard2_key + 3 elif (alignment.category == '3 - 3'): - print "3-3: %.2f" % alignment.d - print "--------------------------" - print "%s" % hard_region1[hard1_key] - print "%s" % hard_region1[hard1_key + 1] - print "%s" % hard_region1[hard1_key + 2] - print "%s" % hard_region2[hard2_key] - print "%s" % hard_region2[hard2_key + 1] - print "%s" % hard_region2[hard2_key + 2] - print "--------------------------" + print("3-3: %.2f" % alignment.d) + print("--------------------------") + print("%s" % hard_region1[hard1_key]) + print("%s" % hard_region1[hard1_key + 1]) + print("%s" % hard_region1[hard1_key + 2]) + print("%s" % hard_region2[hard2_key]) + print("%s" % hard_region2[hard2_key + 1]) + print("%s" % hard_region2[hard2_key + 2]) + print("--------------------------") hard1_key = hard1_key + 3 hard2_key = hard2_key + 3 else: - print "not supported alignment type" + print("not supported alignment type") def list_to_str(input_list): return input_list @@ -214,7 +214,7 @@ alignment_mapping_indices = [] hard1_key = 0 hard2_key = 0 - for soft_key in alignments.keys(): + for soft_key in list(alignments.keys()): alignment = alignments[soft_key] if (alignment.category 
== '1 - 1'): align_tuple = (list_to_str(hard_region1[hard1_key]), list_to_str(hard_region2[hard2_key])) @@ -311,7 +311,7 @@ hard1_key = hard1_key + 3 hard2_key = hard2_key + 3 else: - print "not supported alignment type" + print("not supported alignment type") return (alignment_mapping, alignment_mapping_indices) @@ -320,7 +320,7 @@ hard_key = 0 for hard_list in alignments: for alignment_dict in hard_list: - for align_key in alignment_dict.keys(): + for align_key in list(alignment_dict.keys()): alignment = alignment_dict[align_key] if (alignment.category == '1 - 1'): @@ -366,15 +366,15 @@ align_key) alignment_mappings.append(align_triple) else: - print "not supported alignment type" + print("not supported alignment type") return alignment_mappings def get_test_values(alignments): test_values = [] - for hard_regions_index in alignments.keys(): + for hard_regions_index in list(alignments.keys()): soft_regions_list = [] -RefactoringTool: Files that were modified: RefactoringTool: ../python3/nltk_contrib/nltk_contrib/align/align_util.py for soft_regions_index in alignments[hard_regions_index].keys(): + for soft_regions_index in list(alignments[hard_regions_index].keys()): soft_regions_list.extend(alignments[hard_regions_index][soft_regions_index].alignment_mappings) soft_regions_list.reverse() test_values.extend(soft_regions_list) + for i in '$(find ../python3 -type f -name '\''*.py'\'' |grep -v '\''Tables\.py'\'')' + 2to3 -w -n ../python3/nltk_contrib/nltk_contrib/align/align_regions.py RefactoringTool: Skipping optional fixer: buffer RefactoringTool: Skipping optional fixer: idioms RefactoringTool: Skipping optional fixer: set_literal RefactoringTool: Skipping optional fixer: ws_comma RefactoringTool: Refactored ../python3/nltk_contrib/nltk_contrib/align/align_regions.py --- ../python3/nltk_contrib/nltk_contrib/align/align_regions.py (original) +++ ../python3/nltk_contrib/nltk_contrib/align/align_regions.py (refactored) @@ -7,8 +7,8 @@ from nltk.metrics import scores -import distance_measures -import alignment_util +from . import distance_measures +from . 
import alignment_util ##////////////////////////////////////////////////////// ## Alignment @@ -81,7 +81,7 @@ self.soft_regions_index) self.alignment_mappings.append(align_triple) else: - print "not supported alignment type" + print("not supported alignment type") ##////////////////////////////////////////////////////// ## Aligner @@ -132,10 +132,10 @@ (hard_regions2, number_of_hard_regions2) = tmp.find_sub_regions(self.hard_delimiter) if (number_of_hard_regions1 != number_of_hard_regions2): - print "align_regions: input files do not contain the same number of hard regions" + '\n' - print "%s" % hard_delimiter + '\n' - print "%s has %d and %s has %d" % (self.input_file1, number_of_hard_regions1, \ - self.input_file2, number_of_hard_regions2) + '\n' + print("align_regions: input files do not contain the same number of hard regions" + '\n') + print("%s" % hard_delimiter + '\n') + print("%s has %d and %s has %d" % (self.input_file1, number_of_hard_regions1, \ + self.input_file2, number_of_hard_regions2) + '\n') return @@ -225,7 +225,7 @@ path_x = [[0] * second_len for c in range(first_len)] path_y = [[0] * second_len for c in range(first_len)] - d1 = d2 = d3 = d4 = d5 = d6 = sys.maxint + d1 = d2 = d3 = d4 = d5 = d6 = sys.maxsize for j in range(0, ny + 1): for i in range(0, nx + 1): @@ -234,46 +234,46 @@ d1 = distances[i-1][j-1] + \ dist_funct(x[i-1], y[j-1], 0, 0) else: - d1 = sys.maxint + d1 = sys.maxsize if (i > 0): #/* deletion */ d2 = distances[i-1][j] + \ dist_funct(x[i-1], 0, 0, 0) else: - d2 = sys.maxint + d2 = sys.maxsize if (j > 0): #/* insertion */ d3 = distances[i][j-1] + \ dist_funct(0, y[j-1], 0, 0) else: - d3 = sys.maxint + d3 = sys.maxsize if (i > 1 and j > 0): #/* contraction */ d4 = distances[i-2][j-1] + \ dist_funct(x[i-2], y[j-1], x[i-1], 0) else: - d4 = sys.maxint + d4 = sys.maxsize if (i > 0 and j > 1): #/* expansion */ d5 = distances[i-1][j-2] + \ dist_funct(x[i-1], y[j-2], 0, y[j-1]) else: - d5 = sys.maxint + d5 = sys.maxsize if (i > 1 and j > 1): #/* melding */ d6 = distances[i-2][j-2] + \ dist_funct(x[i-2], y[j-2], x[i-1], y[j-1]) else: - d6 = sys.maxint + d6 = sys.maxsize dmin = min(d1, d2, d3, d4, d5, d6) - if (dmin == sys.maxint): + if (dmin == sys.maxsize): distances[i][j] = 0 elif (dmin == d1): distances[i][j] = d1 @@ -502,7 +502,7 @@ RefactoringTool: Files that were modified: RefactoringTool: ../python3/nltk_contrib/nltk_contrib/align/align_regions.py accuracy = scores.accuracy(reference_values, test_values) - print "accuracy: %.2f" % accuracy + print("accuracy: %.2f" % accuracy) def demo(): """ + for i in '$(find ../python3 -type f -name '\''*.py'\'' |grep -v '\''Tables\.py'\'')' + 2to3 -w -n ../python3/nltk_contrib/nltk_contrib/align/align.py RefactoringTool: Skipping optional fixer: buffer RefactoringTool: Skipping optional fixer: idioms RefactoringTool: Skipping optional fixer: set_literal RefactoringTool: Skipping optional fixer: ws_comma RefactoringTool: Refactored ../python3/nltk_contrib/nltk_contrib/align/align.py --- ../python3/nltk_contrib/nltk_contrib/align/align.py (original) +++ ../python3/nltk_contrib/nltk_contrib/align/align.py (refactored) @@ -7,16 +7,16 @@ # For license information, see LICENSE.TXT import sys -from itertools import izip + from nltk.metrics import scores ## --NLTK-- ## Import the nltk.aligner module, which defines the aligner interface -from api import * - -import distance_measures -import align_util +from .api import * + +from . import distance_measures +from . 
import align_util # Based on Gale & Church 1993, "A Program for Aligning Sentences in Bilingual Corpora" # This is a Python version of the C implementation by Mike Riley presented in the appendix @@ -82,10 +82,10 @@ hard_regions2 = align_util.get_paragraphs_sentences(lines2, hard_delimiter, soft_delimiter) if (len(hard_regions1) != len(hard_regions2)): - print "align_regions: input files do not contain the same number of hard regions" + '\n' - print "%s" % hard_delimiter + '\n' - print "%s has %d and %s has %d" % (input_file1, len(hard_regions1), \ - input_file2, len(hard_regions2) + '\n') + print("align_regions: input files do not contain the same number of hard regions" + '\n') + print("%s" % hard_delimiter + '\n') + print("%s has %d and %s has %d" % (input_file1, len(hard_regions1), \ + input_file2, len(hard_regions2) + '\n')) return ([],[]) return (hard_regions1, hard_regions2) @@ -154,7 +154,7 @@ path_x = [[0] * second_len for c in range(first_len)] path_y = [[0] * second_len for c in range(first_len)] - d1 = d2 = d3 = d4 = d5 = d6 = sys.maxint + d1 = d2 = d3 = d4 = d5 = d6 = sys.maxsize for j in range(0, ny + 1): for i in range(0, nx + 1): @@ -163,46 +163,46 @@ d1 = distances[i-1][j-1] + \ self.dist_funct(x[i-1], y[j-1], 0, 0) else: - d1 = sys.maxint + d1 = sys.maxsize if (i > 0): #/* deletion */ d2 = distances[i-1][j] + \ self.dist_funct(x[i-1], 0, 0, 0) else: - d2 = sys.maxint + d2 = sys.maxsize if (j > 0): #/* insertion */ d3 = distances[i][j-1] + \ self.dist_funct(0, y[j-1], 0, 0) else: - d3 = sys.maxint + d3 = sys.maxsize if (i > 1 and j > 0): #/* contraction */ d4 = distances[i-2][j-1] + \ self.dist_funct(x[i-2], y[j-1], x[i-1], 0) else: - d4 = sys.maxint + d4 = sys.maxsize if (i > 0 and j > 1): #/* expansion */ d5 = distances[i-1][j-2] + \ self.dist_funct(x[i-1], y[j-2], 0, y[j-1]) else: - d5 = sys.maxint + d5 = sys.maxsize if (i > 1 and j > 1): #/* melding */ d6 = distances[i-2][j-2] + \ self.dist_funct(x[i-2], y[j-2], x[i-1], y[j-1]) else: - d6 = sys.maxint + d6 = sys.maxsize dmin = min(d1, d2, d3, d4, d5, d6) - if (dmin == sys.maxint): + if (dmin == sys.maxsize): distances[i][j] = 0 elif (dmin == d1): distances[i][j] = d1 @@ -341,7 +341,7 @@ path_x = [[0] * second_len for c in range(first_len)] path_y = [[0] * second_len for c in range(first_len)] - d1 = d2 = d3 = d4 = d5 = d6 = d7 = d8 = d9 = d10 = d11 = sys.maxint + d1 = d2 = d3 = d4 = d5 = d6 = d7 = d8 = d9 = d10 = d11 = sys.maxsize for j in range(0, ny + 1): for i in range(0, nx + 1): @@ -350,81 +350,81 @@ d1 = distances[i-1][j-1] + \ self.dist_funct(x[i-1], y[j-1], 0, 0, 0, 0) else: - d1 = sys.maxint + d1 = sys.maxsize if (i > 0): #/* deletion */ /* 1-0 */ d2 = distances[i-1][j] + \ self.dist_funct(x[i-1], 0, 0, 0, 0, 0) else: - d2 = sys.maxint + d2 = sys.maxsize if (j > 0): #/* insertion */ /* 0-1 */ d3 = distances[i][j-1] + \ self.dist_funct(0, y[j-1], 0, 0, 0, 0) else: - d3 = sys.maxint + d3 = sys.maxsize if (i > 1 and j > 0): #/* contraction */ /* 2-1 */ d4 = distances[i-2][j-1] + \ self.dist_funct(x[i-2], y[j-1], x[i-1], 0, 0, 0) else: - d4 = sys.maxint + d4 = sys.maxsize if (i > 0 and j > 1): #/* expansion */ /* 1-2 */ d5 = distances[i-1][j-2] + \ self.dist_funct(x[i-1], y[j-2], 0, y[j-1], 0, 0) else: - d5 = sys.maxint + d5 = sys.maxsize if (i > 1 and j > 1): #/* melding */ /* 2-2 */ d6 = distances[i-2][j-2] + \ self.dist_funct(x[i-2], y[j-2], x[i-1], y[j-1], 0, 0) else: - d6 = sys.maxint + d6 = sys.maxsize if (i > 2 and j > 0): #/* contraction */ /* 3-1 */ d7 = distances[i-3][j-1] + \ self.dist_funct(x[i-3], 
y[j-1], x[i-2], 0, x[i-1], 0) else: - d7 = sys.maxint + d7 = sys.maxsize if (i > 2 and j > 1): #/* contraction */ /* 3-2 */ d8 = distances[i-3][j-2] + \ self.dist_funct(x[i-3], y[j-1], x[i-2], y[j-2], x[i-1], 0) else: - d8 = sys.maxint + d8 = sys.maxsize if (i > 0 and j > 2): #/* expansion */ /* 1-3 */ d9 = distances[i-1][j-3] + \ self.dist_funct(x[i-1], y[j-3], 0, y[j-2], 0, y[j-1]) else: - d9 = sys.maxint + d9 = sys.maxsize if (i > 1 and j > 2): #/* expansion */ /* 2-3 */ d10 = distances[i-2][j-3] + \ self.dist_funct(x[i-3], y[j-3], x[i-2], y[j-2], 0, y[j-1]) else: - d10 = sys.maxint + d10 = sys.maxsize if (i > 2 and j > 2): #/* melding */ /* 3-3 */ d11 = distances[i-3][j-3] + \ self.dist_funct(x[i-3], y[j-3], x[i-2], y[j-2], x[i-1], y[j-1]) RefactoringTool: Files that were modified: RefactoringTool: ../python3/nltk_contrib/nltk_contrib/align/align.py else: - d11 = sys.maxint + d11 = sys.maxsize dmin = min(d1, d2, d3, d4, d5, d6, d7, d8, d9, d10, d11) - if (dmin == sys.maxint): + if (dmin == sys.maxsize): distances[i][j] = 0 elif (dmin == d1): distances[i][j] = d1 @@ -619,13 +619,13 @@ """ alignment_mappings = align_util2.get_alignment_links(alignments) - print "Alignment mappings: %s" % alignment_mappings + print("Alignment mappings: %s" % alignment_mappings) #test_values = align_util.get_test_values(alignments) reference_values = align_util2.get_reference_values(gold_file) - print "Reference values: %s" % reference_values + print("Reference values: %s" % reference_values) #accuracy = scores.accuracy(reference_values, test_values) @@ -653,7 +653,7 @@ gc_alignment = gc.batch_align(regions1, regions2) - print "Alignment0: %s" % gc_alignment + print("Alignment0: %s" % gc_alignment) demo_eval(gc_alignment, gold_file) @@ -675,7 +675,7 @@ gc_alignment = gc.batch_align(regions1, regions2) - print "Alignment1: %s" % gc_alignment + print("Alignment1: %s" % gc_alignment) demo_eval(gc_alignment, gold_file) @@ -694,7 +694,7 @@ standard_alignment2 = std.batch_align(s2, t2) - print "Alignment2: %s" % standard_alignment2 + print("Alignment2: %s" % standard_alignment2) # demo 4 @@ -703,14 +703,14 @@ standard_alignment3 = std.align(s3, t3) - print "Alignment3: %s" % standard_alignment3 + print("Alignment3: %s" % standard_alignment3) # demo 5 top_down_alignments = std.recursive_align(s3, t3) for alignment in top_down_alignments: - print "Top down align: %s" % alignment + print("Top down align: %s" % alignment) if __name__=='__main__': demo() + for i in '$(find ../python3 -type f -name '\''*.py'\'' |grep -v '\''Tables\.py'\'')' + 2to3 -w -n ../python3/nltk_contrib/nltk_contrib/align/__init__.py RefactoringTool: Skipping optional fixer: buffer RefactoringTool: Skipping optional fixer: idioms RefactoringTool: Skipping optional fixer: set_literal RefactoringTool: Skipping optional fixer: ws_comma RefactoringTool: Refactored ../python3/nltk_contrib/nltk_contrib/align/__init__.py RefactoringTool: Files that were modified: RefactoringTool: ../python3/nltk_contrib/nltk_contrib/align/__init__.py --- ../python3/nltk_contrib/nltk_contrib/align/__init__.py (original) +++ ../python3/nltk_contrib/nltk_contrib/align/__init__.py (refactored) @@ -9,8 +9,8 @@ Classes and interfaces for aligning text. 
""" -from api import * -from gale_church import * +from .api import * +from .gale_church import * __all__ = [] + for i in '$(find ../python3 -type f -name '\''*.py'\'' |grep -v '\''Tables\.py'\'')' + 2to3 -w -n ../python3/nltk_contrib/nltk_contrib/__init__.py RefactoringTool: Skipping optional fixer: buffer RefactoringTool: Skipping optional fixer: idioms RefactoringTool: Skipping optional fixer: set_literal RefactoringTool: Skipping optional fixer: ws_comma RefactoringTool: No files need to be modified. + for i in '$(find ../python3 -type f -name '\''*.py'\'' |grep -v '\''Tables\.py'\'')' + 2to3 -w -n ../python3/setup.py RefactoringTool: Skipping optional fixer: buffer RefactoringTool: Skipping optional fixer: idioms RefactoringTool: Skipping optional fixer: set_literal RefactoringTool: Skipping optional fixer: ws_comma RefactoringTool: No changes to ../python3/setup.py RefactoringTool: Files that need to be modified: RefactoringTool: ../python3/setup.py + for i in '$(find ../python3 -type f -name '\''*.py'\'' |grep -v '\''Tables\.py'\'')' + 2to3 -w -n ../python3/nltk/wsd.py RefactoringTool: Skipping optional fixer: buffer RefactoringTool: Skipping optional fixer: idioms RefactoringTool: Skipping optional fixer: set_literal RefactoringTool: Skipping optional fixer: ws_comma RefactoringTool: No changes to ../python3/nltk/wsd.py RefactoringTool: Files that need to be modified: RefactoringTool: ../python3/nltk/wsd.py + for i in '$(find ../python3 -type f -name '\''*.py'\'' |grep -v '\''Tables\.py'\'')' + 2to3 -w -n ../python3/nltk/util.py RefactoringTool: Skipping optional fixer: buffer RefactoringTool: Skipping optional fixer: idioms RefactoringTool: Skipping optional fixer: set_literal RefactoringTool: Skipping optional fixer: ws_comma RefactoringTool: Refactored ../python3/nltk/util.py RefactoringTool: Files that were modified: RefactoringTool: ../python3/nltk/util.py --- ../python3/nltk/util.py (original) +++ ../python3/nltk/util.py (refactored) @@ -4,7 +4,7 @@ # Author: Steven Bird # URL: # For license information, see LICENSE.TXT -from __future__ import print_function + import locale import re @@ -485,7 +485,7 @@ return self.__missing__(key) def __iter__(self): - return (key for key in self.keys()) + return (key for key in list(self.keys())) def __missing__(self, key): if not self._default_factory and key not in self._keys: @@ -508,7 +508,7 @@ def items(self): # returns iterator under python 3 and list under python 2 - return zip(self.keys(), self.values()) + return list(zip(list(self.keys()), list(self.values()))) def keys(self, data=None, keys=None): if data: @@ -521,7 +521,7 @@ isinstance(data, OrderedDict) or \ isinstance(data, list) if isinstance(data, dict) or isinstance(data, OrderedDict): - return data.keys() + return list(data.keys()) elif isinstance(data, list): return [key for (key, value) in data] elif '_keys' in self.__dict__: @@ -551,7 +551,7 @@ def values(self): # returns iterator under python 3 - return map(self.get, self._keys) + return list(map(self.get, self._keys)) ###################################################################### # Lazy Sequences @@ -983,7 +983,7 @@ :param lst: the underlying list :type lst: list """ - LazyZip.__init__(self, range(len(lst)), lst) + LazyZip.__init__(self, list(range(len(lst))), lst) ###################################################################### + for i in '$(find ../python3 -type f -name '\''*.py'\'' |grep -v '\''Tables\.py'\'')' + 2to3 -w -n ../python3/nltk/treetransforms.py RefactoringTool: Skipping optional fixer: 
buffer RefactoringTool: Skipping optional fixer: idioms RefactoringTool: Skipping optional fixer: set_literal RefactoringTool: Skipping optional fixer: ws_comma RefactoringTool: Refactored ../python3/nltk/treetransforms.py RefactoringTool: Files that were modified: RefactoringTool: ../python3/nltk/treetransforms.py --- ../python3/nltk/treetransforms.py (original) +++ ../python3/nltk/treetransforms.py (refactored) @@ -106,7 +106,7 @@ C D C D """ -from __future__ import print_function + from nltk.tree import Tree + for i in '$(find ../python3 -type f -name '\''*.py'\'' |grep -v '\''Tables\.py'\'')' + 2to3 -w -n ../python3/nltk/tree.py RefactoringTool: Skipping optional fixer: buffer RefactoringTool: Skipping optional fixer: idioms RefactoringTool: Skipping optional fixer: set_literal RefactoringTool: Skipping optional fixer: ws_comma RefactoringTool: Refactored ../python3/nltk/tree.py RefactoringTool: Files that were modified: RefactoringTool: ../python3/nltk/tree.py --- ../python3/nltk/tree.py (original) +++ ../python3/nltk/tree.py (refactored) @@ -13,7 +13,7 @@ Class for representing hierarchical language structures, such as syntax trees and morphological trees. """ -from __future__ import print_function, unicode_literals + # TODO: add LabelledTree (can be used for dependency trees) @@ -331,7 +331,7 @@ :type filter: function :param filter: the function to filter all local trees """ - if not filter or filter(self): + if not filter or list(filter(self)): yield self for child in self: if isinstance(child, Tree): + for i in '$(find ../python3 -type f -name '\''*.py'\'' |grep -v '\''Tables\.py'\'')' + 2to3 -w -n ../python3/nltk/toolbox.py RefactoringTool: Skipping optional fixer: buffer RefactoringTool: Skipping optional fixer: idioms RefactoringTool: Skipping optional fixer: set_literal RefactoringTool: Skipping optional fixer: ws_comma RefactoringTool: Refactored ../python3/nltk/toolbox.py RefactoringTool: Files that were modified: RefactoringTool: ../python3/nltk/toolbox.py --- ../python3/nltk/toolbox.py (original) +++ ../python3/nltk/toolbox.py (refactored) @@ -10,7 +10,7 @@ Module for reading, writing and manipulating Toolbox databases and settings files. 
""" -from __future__ import print_function + import os, re, codecs from xml.etree.ElementTree import ElementTree, TreeBuilder, Element, SubElement @@ -414,7 +414,7 @@ :type field_orders: dict(tuple) """ order_dicts = dict() - for field, order in field_orders.items(): + for field, order in list(field_orders.items()): order_dicts[field] = order_key = dict() for i, subfield in enumerate(order): order_key[subfield] = i + for i in '$(find ../python3 -type f -name '\''*.py'\'' |grep -v '\''Tables\.py'\'')' + 2to3 -w -n ../python3/nltk/tokenize/util.py RefactoringTool: Skipping optional fixer: buffer RefactoringTool: Skipping optional fixer: idioms RefactoringTool: Skipping optional fixer: set_literal RefactoringTool: Skipping optional fixer: ws_comma RefactoringTool: No changes to ../python3/nltk/tokenize/util.py RefactoringTool: Files that need to be modified: RefactoringTool: ../python3/nltk/tokenize/util.py + for i in '$(find ../python3 -type f -name '\''*.py'\'' |grep -v '\''Tables\.py'\'')' + 2to3 -w -n ../python3/nltk/tokenize/treebank.py RefactoringTool: Skipping optional fixer: buffer RefactoringTool: Skipping optional fixer: idioms RefactoringTool: Skipping optional fixer: set_literal RefactoringTool: Skipping optional fixer: ws_comma RefactoringTool: No changes to ../python3/nltk/tokenize/treebank.py RefactoringTool: Files that need to be modified: RefactoringTool: ../python3/nltk/tokenize/treebank.py + for i in '$(find ../python3 -type f -name '\''*.py'\'' |grep -v '\''Tables\.py'\'')' + 2to3 -w -n ../python3/nltk/tokenize/texttiling.py RefactoringTool: Skipping optional fixer: buffer RefactoringTool: Skipping optional fixer: idioms RefactoringTool: Skipping optional fixer: set_literal RefactoringTool: Skipping optional fixer: ws_comma RefactoringTool: Refactored ../python3/nltk/tokenize/texttiling.py RefactoringTool: Files that were modified: RefactoringTool: ../python3/nltk/tokenize/texttiling.py --- ../python3/nltk/tokenize/texttiling.py (original) +++ ../python3/nltk/tokenize/texttiling.py (refactored) @@ -144,8 +144,7 @@ def _block_comparison(self, tokseqs, token_table): "Implements the block comparison method" def blk_frq(tok, block): - ts_occs = filter(lambda o: o[0] in block, - token_table[tok].ts_occurences) + ts_occs = [o for o in token_table[tok].ts_occurences if o[0] in block] freq = sum([tsocc[1] for tsocc in ts_occs]) return freq @@ -282,9 +281,9 @@ else: cutoff = avg-stdev/2.0 - depth_tuples = sorted(zip(depth_scores, range(len(depth_scores)))) + depth_tuples = sorted(zip(depth_scores, list(range(len(depth_scores))))) depth_tuples.reverse() - hp = filter(lambda x:x[0]>cutoff, depth_tuples) + hp = [x for x in depth_tuples if x[0]>cutoff] for dt in hp: boundaries[dt[1]] = 1 @@ -441,10 +440,10 @@ s,ss,d,b=tt.tokenize(text) pylab.xlabel("Sentence Gap index") pylab.ylabel("Gap Scores") - pylab.plot(range(len(s)), s, label="Gap Scores") - pylab.plot(range(len(ss)), ss, label="Smoothed Gap scores") - pylab.plot(range(len(d)), d, label="Depth scores") - pylab.stem(range(len(b)),b) + pylab.plot(list(range(len(s))), s, label="Gap Scores") + pylab.plot(list(range(len(ss))), ss, label="Smoothed Gap scores") + pylab.plot(list(range(len(d))), d, label="Depth scores") + pylab.stem(list(range(len(b))),b) pylab.legend() pylab.show() + for i in '$(find ../python3 -type f -name '\''*.py'\'' |grep -v '\''Tables\.py'\'')' + 2to3 -w -n ../python3/nltk/tokenize/stanford.py RefactoringTool: Skipping optional fixer: buffer RefactoringTool: Skipping optional fixer: idioms RefactoringTool: 
Skipping optional fixer: set_literal RefactoringTool: Skipping optional fixer: ws_comma RefactoringTool: Refactored ../python3/nltk/tokenize/stanford.py RefactoringTool: Files that were modified: RefactoringTool: ../python3/nltk/tokenize/stanford.py --- ../python3/nltk/tokenize/stanford.py (original) +++ ../python3/nltk/tokenize/stanford.py (refactored) @@ -7,7 +7,7 @@ # URL: # For license information, see LICENSE.TXT -from __future__ import unicode_literals, print_function + import tempfile import os @@ -47,7 +47,7 @@ self._encoding = encoding self.java_options = java_options options = {} if options is None else options - self._options_cmd = ','.join('{0}={1}'.format(key, json.dumps(val)) for key, val in options.items()) + self._options_cmd = ','.join('{0}={1}'.format(key, json.dumps(val)) for key, val in list(options.items())) @staticmethod def _parse_tokenized_output(s): + for i in '$(find ../python3 -type f -name '\''*.py'\'' |grep -v '\''Tables\.py'\'')' + 2to3 -w -n ../python3/nltk/tokenize/simple.py RefactoringTool: Skipping optional fixer: buffer RefactoringTool: Skipping optional fixer: idioms RefactoringTool: Skipping optional fixer: set_literal RefactoringTool: Skipping optional fixer: ws_comma RefactoringTool: Refactored ../python3/nltk/tokenize/simple.py RefactoringTool: Files that were modified: RefactoringTool: ../python3/nltk/tokenize/simple.py --- ../python3/nltk/tokenize/simple.py (original) +++ ../python3/nltk/tokenize/simple.py (refactored) @@ -34,7 +34,7 @@ to specify the tokenization conventions when building a `CorpusReader`. """ -from __future__ import unicode_literals + from nltk.tokenize.api import TokenizerI, StringTokenizer from nltk.tokenize.util import string_span_tokenize, regexp_span_tokenize + for i in '$(find ../python3 -type f -name '\''*.py'\'' |grep -v '\''Tables\.py'\'')' + 2to3 -w -n ../python3/nltk/tokenize/sexpr.py RefactoringTool: Skipping optional fixer: buffer RefactoringTool: Skipping optional fixer: idioms RefactoringTool: Skipping optional fixer: set_literal RefactoringTool: Skipping optional fixer: ws_comma RefactoringTool: No changes to ../python3/nltk/tokenize/sexpr.py RefactoringTool: Files that need to be modified: RefactoringTool: ../python3/nltk/tokenize/sexpr.py + for i in '$(find ../python3 -type f -name '\''*.py'\'' |grep -v '\''Tables\.py'\'')' + 2to3 -w -n ../python3/nltk/tokenize/regexp.py RefactoringTool: Skipping optional fixer: buffer RefactoringTool: Skipping optional fixer: idioms RefactoringTool: Skipping optional fixer: set_literal RefactoringTool: Skipping optional fixer: ws_comma RefactoringTool: Refactored ../python3/nltk/tokenize/regexp.py RefactoringTool: Files that were modified: RefactoringTool: ../python3/nltk/tokenize/regexp.py --- ../python3/nltk/tokenize/regexp.py (original) +++ ../python3/nltk/tokenize/regexp.py (refactored) @@ -65,7 +65,7 @@ ``re`` functions, where the pattern is always the first argument. (This is for consistency with the other NLTK tokenizers.) 
""" -from __future__ import unicode_literals + import re import sre_constants + for i in '$(find ../python3 -type f -name '\''*.py'\'' |grep -v '\''Tables\.py'\'')' + 2to3 -w -n ../python3/nltk/tokenize/punkt.py RefactoringTool: Skipping optional fixer: buffer RefactoringTool: Skipping optional fixer: idioms RefactoringTool: Skipping optional fixer: set_literal RefactoringTool: Skipping optional fixer: ws_comma RefactoringTool: Refactored ../python3/nltk/tokenize/punkt.py RefactoringTool: Files that were modified: RefactoringTool: ../python3/nltk/tokenize/punkt.py --- ../python3/nltk/tokenize/punkt.py (original) +++ ../python3/nltk/tokenize/punkt.py (refactored) @@ -111,7 +111,7 @@ Kiss, Tibor and Strunk, Jan (2006): Unsupervised Multilingual Sentence Boundary Detection. Computational Linguistics 32: 485-525. """ -from __future__ import print_function, unicode_literals + # TODO: Make orthographic heuristic less susceptible to overtraining # TODO: Frequent sentence starters optionally exclude always-capitalised words + for i in '$(find ../python3 -type f -name '\''*.py'\'' |grep -v '\''Tables\.py'\'')' + 2to3 -w -n ../python3/nltk/tokenize/api.py RefactoringTool: Skipping optional fixer: buffer RefactoringTool: Skipping optional fixer: idioms RefactoringTool: Skipping optional fixer: set_literal RefactoringTool: Skipping optional fixer: ws_comma RefactoringTool: No changes to ../python3/nltk/tokenize/api.py RefactoringTool: Files that need to be modified: RefactoringTool: ../python3/nltk/tokenize/api.py + for i in '$(find ../python3 -type f -name '\''*.py'\'' |grep -v '\''Tables\.py'\'')' + 2to3 -w -n ../python3/nltk/tokenize/__init__.py RefactoringTool: Skipping optional fixer: buffer RefactoringTool: Skipping optional fixer: idioms RefactoringTool: Skipping optional fixer: set_literal RefactoringTool: Skipping optional fixer: ws_comma RefactoringTool: No changes to ../python3/nltk/tokenize/__init__.py RefactoringTool: Files that need to be modified: RefactoringTool: ../python3/nltk/tokenize/__init__.py + for i in '$(find ../python3 -type f -name '\''*.py'\'' |grep -v '\''Tables\.py'\'')' + 2to3 -w -n ../python3/nltk/text.py RefactoringTool: Skipping optional fixer: buffer RefactoringTool: Skipping optional fixer: idioms RefactoringTool: Skipping optional fixer: set_literal RefactoringTool: Skipping optional fixer: ws_comma RefactoringTool: Refactored ../python3/nltk/text.py RefactoringTool: Files that were modified: RefactoringTool: ../python3/nltk/text.py --- ../python3/nltk/text.py (original) +++ ../python3/nltk/text.py (refactored) @@ -13,7 +13,7 @@ regular expression search over tokenized strings, and distributional similarity. 
""" -from __future__ import print_function, division, unicode_literals + from math import log from collections import defaultdict @@ -51,7 +51,7 @@ else: self._context_func = self._default_context if filter: - tokens = [t for t in tokens if filter(t)] + tokens = [t for t in tokens if list(filter(t))] self._word_to_contexts = CFD((self._key(w), self._context_func(tokens, i)) for i, w in enumerate(tokens)) self._context_to_words = CFD((self._context_func(tokens, i), self._key(w)) @@ -75,7 +75,7 @@ word_contexts = set(self._word_to_contexts[word]) scores = {} - for w, w_contexts in self._word_to_contexts.items(): + for w, w_contexts in list(self._word_to_contexts.items()): scores[w] = f_measure(word_contexts, set(w_contexts)) return scores + for i in '$(find ../python3 -type f -name '\''*.py'\'' |grep -v '\''Tables\.py'\'')' + 2to3 -w -n ../python3/nltk/test/wordnet_fixt.py RefactoringTool: Skipping optional fixer: buffer RefactoringTool: Skipping optional fixer: idioms RefactoringTool: Skipping optional fixer: set_literal RefactoringTool: Skipping optional fixer: ws_comma RefactoringTool: Refactored ../python3/nltk/test/wordnet_fixt.py RefactoringTool: Files that were modified: RefactoringTool: ../python3/nltk/test/wordnet_fixt.py --- ../python3/nltk/test/wordnet_fixt.py (original) +++ ../python3/nltk/test/wordnet_fixt.py (refactored) @@ -1,5 +1,5 @@ # -*- coding: utf-8 -*- -from __future__ import absolute_import + def teardown_module(module=None): from nltk.corpus import wordnet + for i in '$(find ../python3 -type f -name '\''*.py'\'' |grep -v '\''Tables\.py'\'')' + 2to3 -w -n ../python3/nltk/test/unit/utils.py RefactoringTool: Skipping optional fixer: buffer RefactoringTool: Skipping optional fixer: idioms RefactoringTool: Skipping optional fixer: set_literal RefactoringTool: Skipping optional fixer: ws_comma RefactoringTool: Refactored ../python3/nltk/test/unit/utils.py RefactoringTool: Files that were modified: RefactoringTool: ../python3/nltk/test/unit/utils.py --- ../python3/nltk/test/unit/utils.py (original) +++ ../python3/nltk/test/unit/utils.py (refactored) @@ -1,5 +1,5 @@ # -*- coding: utf-8 -*- -from __future__ import absolute_import + from unittest import TestCase from functools import wraps from nose.plugins.skip import SkipTest + for i in '$(find ../python3 -type f -name '\''*.py'\'' |grep -v '\''Tables\.py'\'')' + 2to3 -w -n ../python3/nltk/test/unit/test_tag.py RefactoringTool: Skipping optional fixer: buffer RefactoringTool: Skipping optional fixer: idioms RefactoringTool: Skipping optional fixer: set_literal RefactoringTool: Skipping optional fixer: ws_comma RefactoringTool: Refactored ../python3/nltk/test/unit/test_tag.py RefactoringTool: Files that were modified: RefactoringTool: ../python3/nltk/test/unit/test_tag.py --- ../python3/nltk/test/unit/test_tag.py (original) +++ ../python3/nltk/test/unit/test_tag.py (refactored) @@ -1,5 +1,5 @@ # -*- coding: utf-8 -*- -from __future__ import absolute_import, unicode_literals + def test_basic(): from nltk.tag import pos_tag + for i in '$(find ../python3 -type f -name '\''*.py'\'' |grep -v '\''Tables\.py'\'')' + 2to3 -w -n ../python3/nltk/test/unit/test_stem.py RefactoringTool: Skipping optional fixer: buffer RefactoringTool: Skipping optional fixer: idioms RefactoringTool: Skipping optional fixer: set_literal RefactoringTool: Skipping optional fixer: ws_comma RefactoringTool: Refactored ../python3/nltk/test/unit/test_stem.py RefactoringTool: Files that were modified: RefactoringTool: ../python3/nltk/test/unit/test_stem.py --- 
../python3/nltk/test/unit/test_stem.py (original) +++ ../python3/nltk/test/unit/test_stem.py (refactored) @@ -1,5 +1,5 @@ # -*- coding: utf-8 -*- -from __future__ import print_function, unicode_literals + import unittest from nltk.stem.snowball import SnowballStemmer + for i in '$(find ../python3 -type f -name '\''*.py'\'' |grep -v '\''Tables\.py'\'')' + 2to3 -w -n ../python3/nltk/test/unit/test_seekable_unicode_stream_reader.py RefactoringTool: Skipping optional fixer: buffer RefactoringTool: Skipping optional fixer: idioms RefactoringTool: Skipping optional fixer: set_literal RefactoringTool: Skipping optional fixer: ws_comma RefactoringTool: Refactored ../python3/nltk/test/unit/test_seekable_unicode_stream_reader.py RefactoringTool: Files that were modified: RefactoringTool: ../python3/nltk/test/unit/test_seekable_unicode_stream_reader.py --- ../python3/nltk/test/unit/test_seekable_unicode_stream_reader.py (original) +++ ../python3/nltk/test/unit/test_seekable_unicode_stream_reader.py (refactored) @@ -3,7 +3,7 @@ The following test performs a random series of reads, seeks, and tells, and checks that the results are consistent. """ -from __future__ import absolute_import, unicode_literals + import random import functools from io import BytesIO + for i in '$(find ../python3 -type f -name '\''*.py'\'' |grep -v '\''Tables\.py'\'')' + 2to3 -w -n ../python3/nltk/test/unit/test_naivebayes.py RefactoringTool: Skipping optional fixer: buffer RefactoringTool: Skipping optional fixer: idioms RefactoringTool: Skipping optional fixer: set_literal RefactoringTool: Skipping optional fixer: ws_comma RefactoringTool: Refactored ../python3/nltk/test/unit/test_naivebayes.py RefactoringTool: Files that were modified: RefactoringTool: ../python3/nltk/test/unit/test_naivebayes.py --- ../python3/nltk/test/unit/test_naivebayes.py (original) +++ ../python3/nltk/test/unit/test_naivebayes.py (refactored) @@ -1,5 +1,5 @@ # -*- coding: utf-8 -*- -from __future__ import print_function, unicode_literals + import unittest + for i in '$(find ../python3 -type f -name '\''*.py'\'' |grep -v '\''Tables\.py'\'')' + 2to3 -w -n ../python3/nltk/test/unit/test_hmm.py RefactoringTool: Skipping optional fixer: buffer RefactoringTool: Skipping optional fixer: idioms RefactoringTool: Skipping optional fixer: set_literal RefactoringTool: Skipping optional fixer: ws_comma RefactoringTool: Refactored ../python3/nltk/test/unit/test_hmm.py RefactoringTool: Files that were modified: RefactoringTool: ../python3/nltk/test/unit/test_hmm.py --- ../python3/nltk/test/unit/test_hmm.py (original) +++ ../python3/nltk/test/unit/test_hmm.py (refactored) @@ -1,5 +1,5 @@ # -*- coding: utf-8 -*- -from __future__ import absolute_import, unicode_literals + from nltk.tag import hmm def _wikipedia_example_hmm(): + for i in '$(find ../python3 -type f -name '\''*.py'\'' |grep -v '\''Tables\.py'\'')' + 2to3 -w -n ../python3/nltk/test/unit/test_corpus_views.py RefactoringTool: Skipping optional fixer: buffer RefactoringTool: Skipping optional fixer: idioms RefactoringTool: Skipping optional fixer: set_literal RefactoringTool: Skipping optional fixer: ws_comma RefactoringTool: Refactored ../python3/nltk/test/unit/test_corpus_views.py RefactoringTool: Files that were modified: RefactoringTool: ../python3/nltk/test/unit/test_corpus_views.py --- ../python3/nltk/test/unit/test_corpus_views.py (original) +++ ../python3/nltk/test/unit/test_corpus_views.py (refactored) @@ -2,7 +2,7 @@ """ Corpus View Regression Tests """ -from __future__ import absolute_import, 
unicode_literals + import unittest import nltk.data from nltk.corpus.reader.util import (StreamBackedCorpusView, + for i in '$(find ../python3 -type f -name '\''*.py'\'' |grep -v '\''Tables\.py'\'')' + 2to3 -w -n ../python3/nltk/test/unit/test_corpora.py RefactoringTool: Skipping optional fixer: buffer RefactoringTool: Skipping optional fixer: idioms RefactoringTool: Skipping optional fixer: set_literal RefactoringTool: Skipping optional fixer: ws_comma RefactoringTool: Refactored ../python3/nltk/test/unit/test_corpora.py RefactoringTool: Files that were modified: RefactoringTool: ../python3/nltk/test/unit/test_corpora.py --- ../python3/nltk/test/unit/test_corpora.py (original) +++ ../python3/nltk/test/unit/test_corpora.py (refactored) @@ -1,5 +1,5 @@ # -*- coding: utf-8 -*- -from __future__ import absolute_import, unicode_literals + import unittest from nltk.corpus import (sinica_treebank, conll2007, indian, cess_cat, cess_esp, floresta, ptb, udhr) + for i in '$(find ../python3 -type f -name '\''*.py'\'' |grep -v '\''Tables\.py'\'')' + 2to3 -w -n ../python3/nltk/test/unit/test_collocations.py RefactoringTool: Skipping optional fixer: buffer RefactoringTool: Skipping optional fixer: idioms RefactoringTool: Skipping optional fixer: set_literal RefactoringTool: Skipping optional fixer: ws_comma RefactoringTool: Refactored ../python3/nltk/test/unit/test_collocations.py RefactoringTool: Files that were modified: RefactoringTool: ../python3/nltk/test/unit/test_collocations.py --- ../python3/nltk/test/unit/test_collocations.py (original) +++ ../python3/nltk/test/unit/test_collocations.py (refactored) @@ -1,5 +1,5 @@ # -*- coding: utf-8 -*- -from __future__ import absolute_import, unicode_literals + import unittest from nltk.collocations import BigramCollocationFinder + for i in '$(find ../python3 -type f -name '\''*.py'\'' |grep -v '\''Tables\.py'\'')' + 2to3 -w -n ../python3/nltk/test/unit/test_classify.py RefactoringTool: Skipping optional fixer: buffer RefactoringTool: Skipping optional fixer: idioms RefactoringTool: Skipping optional fixer: set_literal RefactoringTool: Skipping optional fixer: ws_comma RefactoringTool: Refactored ../python3/nltk/test/unit/test_classify.py RefactoringTool: Files that were modified: RefactoringTool: ../python3/nltk/test/unit/test_classify.py --- ../python3/nltk/test/unit/test_classify.py (original) +++ ../python3/nltk/test/unit/test_classify.py (refactored) @@ -2,7 +2,7 @@ """ Unit tests for nltk.classify. See also: nltk/test/classify.doctest """ -from __future__ import absolute_import + from nose import SkipTest from nltk import classify + for i in '$(find ../python3 -type f -name '\''*.py'\'' |grep -v '\''Tables\.py'\'')' + 2to3 -w -n ../python3/nltk/test/unit/test_2x_compat.py RefactoringTool: Skipping optional fixer: buffer RefactoringTool: Skipping optional fixer: idioms RefactoringTool: Skipping optional fixer: set_literal RefactoringTool: Skipping optional fixer: ws_comma RefactoringTool: Refactored ../python3/nltk/test/unit/test_2x_compat.py RefactoringTool: Files that were modified: RefactoringTool: ../python3/nltk/test/unit/test_2x_compat.py --- ../python3/nltk/test/unit/test_2x_compat.py (original) +++ ../python3/nltk/test/unit/test_2x_compat.py (refactored) @@ -3,7 +3,7 @@ Unit tests for nltk.compat. See also nltk/test/compat.doctest. 
""" -from __future__ import absolute_import, unicode_literals + import unittest from nltk.text import Text + for i in '$(find ../python3 -type f -name '\''*.py'\'' |grep -v '\''Tables\.py'\'')' + 2to3 -w -n ../python3/nltk/test/unit/__init__.py RefactoringTool: Skipping optional fixer: buffer RefactoringTool: Skipping optional fixer: idioms RefactoringTool: Skipping optional fixer: set_literal RefactoringTool: Skipping optional fixer: ws_comma RefactoringTool: No files need to be modified. + for i in '$(find ../python3 -type f -name '\''*.py'\'' |grep -v '\''Tables\.py'\'')' + 2to3 -w -n ../python3/nltk/test/semantics_fixt.py RefactoringTool: Skipping optional fixer: buffer RefactoringTool: Skipping optional fixer: idioms RefactoringTool: Skipping optional fixer: set_literal RefactoringTool: Skipping optional fixer: ws_comma RefactoringTool: Refactored ../python3/nltk/test/semantics_fixt.py RefactoringTool: Files that were modified: RefactoringTool: ../python3/nltk/test/semantics_fixt.py --- ../python3/nltk/test/semantics_fixt.py (original) +++ ../python3/nltk/test/semantics_fixt.py (refactored) @@ -1,5 +1,5 @@ # -*- coding: utf-8 -*- -from __future__ import absolute_import + # reset the variables counter before running tests def setup_module(module): + for i in '$(find ../python3 -type f -name '\''*.py'\'' |grep -v '\''Tables\.py'\'')' + 2to3 -w -n ../python3/nltk/test/segmentation_fixt.py RefactoringTool: Skipping optional fixer: buffer RefactoringTool: Skipping optional fixer: idioms RefactoringTool: Skipping optional fixer: set_literal RefactoringTool: Skipping optional fixer: ws_comma RefactoringTool: Refactored ../python3/nltk/test/segmentation_fixt.py RefactoringTool: Files that were modified: RefactoringTool: ../python3/nltk/test/segmentation_fixt.py --- ../python3/nltk/test/segmentation_fixt.py (original) +++ ../python3/nltk/test/segmentation_fixt.py (refactored) @@ -1,5 +1,5 @@ # -*- coding: utf-8 -*- -from __future__ import absolute_import + # skip segmentation.doctest if numpy is not available + for i in '$(find ../python3 -type f -name '\''*.py'\'' |grep -v '\''Tables\.py'\'')' + 2to3 -w -n ../python3/nltk/test/runtests.py RefactoringTool: Skipping optional fixer: buffer RefactoringTool: Skipping optional fixer: idioms RefactoringTool: Skipping optional fixer: set_literal RefactoringTool: Skipping optional fixer: ws_comma RefactoringTool: Refactored ../python3/nltk/test/runtests.py RefactoringTool: Files that were modified: RefactoringTool: ../python3/nltk/test/runtests.py --- ../python3/nltk/test/runtests.py (original) +++ ../python3/nltk/test/runtests.py (refactored) @@ -1,6 +1,6 @@ #!/usr/bin/env python # -*- coding: utf-8 -*- -from __future__ import absolute_import, print_function + import sys import os import nose + for i in '$(find ../python3 -type f -name '\''*.py'\'' |grep -v '\''Tables\.py'\'')' + 2to3 -w -n ../python3/nltk/test/probability_fixt.py RefactoringTool: Skipping optional fixer: buffer RefactoringTool: Skipping optional fixer: idioms RefactoringTool: Skipping optional fixer: set_literal RefactoringTool: Skipping optional fixer: ws_comma RefactoringTool: Refactored ../python3/nltk/test/probability_fixt.py RefactoringTool: Files that were modified: RefactoringTool: ../python3/nltk/test/probability_fixt.py --- ../python3/nltk/test/probability_fixt.py (original) +++ ../python3/nltk/test/probability_fixt.py (refactored) @@ -1,5 +1,5 @@ # -*- coding: utf-8 -*- -from __future__ import absolute_import + # probability.doctest uses HMM which requires numpy; # skip 
probability.doctest if numpy is not available + for i in '$(find ../python3 -type f -name '\''*.py'\'' |grep -v '\''Tables\.py'\'')' + 2to3 -w -n ../python3/nltk/test/portuguese_en_fixt.py RefactoringTool: Skipping optional fixer: buffer RefactoringTool: Skipping optional fixer: idioms RefactoringTool: Skipping optional fixer: set_literal RefactoringTool: Skipping optional fixer: ws_comma RefactoringTool: Refactored ../python3/nltk/test/portuguese_en_fixt.py RefactoringTool: Files that were modified: RefactoringTool: ../python3/nltk/test/portuguese_en_fixt.py --- ../python3/nltk/test/portuguese_en_fixt.py (original) +++ ../python3/nltk/test/portuguese_en_fixt.py (refactored) @@ -1,5 +1,5 @@ # -*- coding: utf-8 -*- -from __future__ import absolute_import + from nltk.compat import PY3 from nltk.corpus import teardown_module + for i in '$(find ../python3 -type f -name '\''*.py'\'' |grep -v '\''Tables\.py'\'')' + 2to3 -w -n ../python3/nltk/test/nonmonotonic_fixt.py RefactoringTool: Skipping optional fixer: buffer RefactoringTool: Skipping optional fixer: idioms RefactoringTool: Skipping optional fixer: set_literal RefactoringTool: Skipping optional fixer: ws_comma RefactoringTool: Refactored ../python3/nltk/test/nonmonotonic_fixt.py RefactoringTool: Files that were modified: RefactoringTool: ../python3/nltk/test/nonmonotonic_fixt.py --- ../python3/nltk/test/nonmonotonic_fixt.py (original) +++ ../python3/nltk/test/nonmonotonic_fixt.py (refactored) @@ -1,5 +1,5 @@ # -*- coding: utf-8 -*- -from __future__ import absolute_import + def setup_module(module): from nose import SkipTest + for i in '$(find ../python3 -type f -name '\''*.py'\'' |grep -v '\''Tables\.py'\'')' + 2to3 -w -n ../python3/nltk/test/inference_fixt.py RefactoringTool: Skipping optional fixer: buffer RefactoringTool: Skipping optional fixer: idioms RefactoringTool: Skipping optional fixer: set_literal RefactoringTool: Skipping optional fixer: ws_comma RefactoringTool: Refactored ../python3/nltk/test/inference_fixt.py RefactoringTool: Files that were modified: RefactoringTool: ../python3/nltk/test/inference_fixt.py --- ../python3/nltk/test/inference_fixt.py (original) +++ ../python3/nltk/test/inference_fixt.py (refactored) @@ -1,5 +1,5 @@ # -*- coding: utf-8 -*- -from __future__ import absolute_import + def setup_module(module): from nose import SkipTest + for i in '$(find ../python3 -type f -name '\''*.py'\'' |grep -v '\''Tables\.py'\'')' + 2to3 -w -n ../python3/nltk/test/gluesemantics_malt_fixt.py RefactoringTool: Skipping optional fixer: buffer RefactoringTool: Skipping optional fixer: idioms RefactoringTool: Skipping optional fixer: set_literal RefactoringTool: Skipping optional fixer: ws_comma RefactoringTool: Refactored ../python3/nltk/test/gluesemantics_malt_fixt.py RefactoringTool: Files that were modified: RefactoringTool: ../python3/nltk/test/gluesemantics_malt_fixt.py --- ../python3/nltk/test/gluesemantics_malt_fixt.py (original) +++ ../python3/nltk/test/gluesemantics_malt_fixt.py (refactored) @@ -1,5 +1,5 @@ # -*- coding: utf-8 -*- -from __future__ import absolute_import + def setup_module(module): from nose import SkipTest + for i in '$(find ../python3 -type f -name '\''*.py'\'' |grep -v '\''Tables\.py'\'')' + 2to3 -w -n ../python3/nltk/test/doctest_nose_plugin.py RefactoringTool: Skipping optional fixer: buffer RefactoringTool: Skipping optional fixer: idioms RefactoringTool: Skipping optional fixer: set_literal RefactoringTool: Skipping optional fixer: ws_comma RefactoringTool: Refactored 
../python3/nltk/test/doctest_nose_plugin.py RefactoringTool: Files that were modified: RefactoringTool: ../python3/nltk/test/doctest_nose_plugin.py --- ../python3/nltk/test/doctest_nose_plugin.py (original) +++ ../python3/nltk/test/doctest_nose_plugin.py (refactored) @@ -1,5 +1,5 @@ # -*- coding: utf-8 -*- -from __future__ import print_function + from nose.suite import ContextList import re import sys + for i in '$(find ../python3 -type f -name '\''*.py'\'' |grep -v '\''Tables\.py'\'')' + 2to3 -w -n ../python3/nltk/test/discourse_fixt.py RefactoringTool: Skipping optional fixer: buffer RefactoringTool: Skipping optional fixer: idioms RefactoringTool: Skipping optional fixer: set_literal RefactoringTool: Skipping optional fixer: ws_comma RefactoringTool: Refactored ../python3/nltk/test/discourse_fixt.py RefactoringTool: Files that were modified: RefactoringTool: ../python3/nltk/test/discourse_fixt.py --- ../python3/nltk/test/discourse_fixt.py (original) +++ ../python3/nltk/test/discourse_fixt.py (refactored) @@ -1,5 +1,5 @@ # -*- coding: utf-8 -*- -from __future__ import absolute_import + # FIXME: the entire discourse.doctest is skipped if Prover9/Mace4 is # not installed, but there are pure-python parts that don't need Prover9. + for i in '$(find ../python3 -type f -name '\''*.py'\'' |grep -v '\''Tables\.py'\'')' + 2to3 -w -n ../python3/nltk/test/corpus_fixt.py RefactoringTool: Skipping optional fixer: buffer RefactoringTool: Skipping optional fixer: idioms RefactoringTool: Skipping optional fixer: set_literal RefactoringTool: Skipping optional fixer: ws_comma RefactoringTool: Refactored ../python3/nltk/test/corpus_fixt.py RefactoringTool: Files that were modified: RefactoringTool: ../python3/nltk/test/corpus_fixt.py --- ../python3/nltk/test/corpus_fixt.py (original) +++ ../python3/nltk/test/corpus_fixt.py (refactored) @@ -1,4 +1,4 @@ # -*- coding: utf-8 -*- -from __future__ import absolute_import + from nltk.corpus import teardown_module + for i in '$(find ../python3 -type f -name '\''*.py'\'' |grep -v '\''Tables\.py'\'')' + 2to3 -w -n ../python3/nltk/test/compat_fixt.py RefactoringTool: Skipping optional fixer: buffer RefactoringTool: Skipping optional fixer: idioms RefactoringTool: Skipping optional fixer: set_literal RefactoringTool: Skipping optional fixer: ws_comma RefactoringTool: Refactored ../python3/nltk/test/compat_fixt.py RefactoringTool: Files that were modified: RefactoringTool: ../python3/nltk/test/compat_fixt.py --- ../python3/nltk/test/compat_fixt.py (original) +++ ../python3/nltk/test/compat_fixt.py (refactored) @@ -1,5 +1,5 @@ # -*- coding: utf-8 -*- -from __future__ import absolute_import + from nltk.compat import PY3 def setup_module(module): + for i in '$(find ../python3 -type f -name '\''*.py'\'' |grep -v '\''Tables\.py'\'')' + 2to3 -w -n ../python3/nltk/test/classify_fixt.py RefactoringTool: Skipping optional fixer: buffer RefactoringTool: Skipping optional fixer: idioms RefactoringTool: Skipping optional fixer: set_literal RefactoringTool: Skipping optional fixer: ws_comma RefactoringTool: Refactored ../python3/nltk/test/classify_fixt.py RefactoringTool: Files that were modified: RefactoringTool: ../python3/nltk/test/classify_fixt.py --- ../python3/nltk/test/classify_fixt.py (original) +++ ../python3/nltk/test/classify_fixt.py (refactored) @@ -1,5 +1,5 @@ # -*- coding: utf-8 -*- -from __future__ import absolute_import + # most of classify.doctest requires numpy def setup_module(module): + for i in '$(find ../python3 -type f -name '\''*.py'\'' |grep -v 
'\''Tables\.py'\'')' + 2to3 -w -n ../python3/nltk/test/childes_fixt.py RefactoringTool: Skipping optional fixer: buffer RefactoringTool: Skipping optional fixer: idioms RefactoringTool: Skipping optional fixer: set_literal RefactoringTool: Skipping optional fixer: ws_comma RefactoringTool: Refactored ../python3/nltk/test/childes_fixt.py RefactoringTool: Files that were modified: RefactoringTool: ../python3/nltk/test/childes_fixt.py --- ../python3/nltk/test/childes_fixt.py (original) +++ ../python3/nltk/test/childes_fixt.py (refactored) @@ -1,5 +1,5 @@ # -*- coding: utf-8 -*- -from __future__ import absolute_import + def setup_module(module): from nose import SkipTest + for i in '$(find ../python3 -type f -name '\''*.py'\'' |grep -v '\''Tables\.py'\'')' + 2to3 -w -n ../python3/nltk/test/all.py RefactoringTool: Skipping optional fixer: buffer RefactoringTool: Skipping optional fixer: idioms RefactoringTool: Skipping optional fixer: set_literal RefactoringTool: Skipping optional fixer: ws_comma RefactoringTool: No changes to ../python3/nltk/test/all.py RefactoringTool: Files that need to be modified: RefactoringTool: ../python3/nltk/test/all.py + for i in '$(find ../python3 -type f -name '\''*.py'\'' |grep -v '\''Tables\.py'\'')' + 2to3 -w -n ../python3/nltk/test/align_fixt.py RefactoringTool: Skipping optional fixer: buffer RefactoringTool: Skipping optional fixer: idioms RefactoringTool: Skipping optional fixer: set_literal RefactoringTool: Skipping optional fixer: ws_comma RefactoringTool: Refactored ../python3/nltk/test/align_fixt.py RefactoringTool: Files that were modified: RefactoringTool: ../python3/nltk/test/align_fixt.py --- ../python3/nltk/test/align_fixt.py (original) +++ ../python3/nltk/test/align_fixt.py (refactored) @@ -1,4 +1,4 @@ # -*- coding: utf-8 -*- -from __future__ import absolute_import + from nltk.corpus import teardown_module + for i in '$(find ../python3 -type f -name '\''*.py'\'' |grep -v '\''Tables\.py'\'')' + 2to3 -w -n ../python3/nltk/test/__init__.py RefactoringTool: Skipping optional fixer: buffer RefactoringTool: Skipping optional fixer: idioms RefactoringTool: Skipping optional fixer: set_literal RefactoringTool: Skipping optional fixer: ws_comma RefactoringTool: No changes to ../python3/nltk/test/__init__.py RefactoringTool: Files that need to be modified: RefactoringTool: ../python3/nltk/test/__init__.py + for i in '$(find ../python3 -type f -name '\''*.py'\'' |grep -v '\''Tables\.py'\'')' + 2to3 -w -n ../python3/nltk/tbl/template.py RefactoringTool: Skipping optional fixer: buffer RefactoringTool: Skipping optional fixer: idioms RefactoringTool: Skipping optional fixer: set_literal RefactoringTool: Skipping optional fixer: ws_comma RefactoringTool: Refactored ../python3/nltk/tbl/template.py RefactoringTool: Files that were modified: RefactoringTool: ../python3/nltk/tbl/template.py --- ../python3/nltk/tbl/template.py (original) +++ ../python3/nltk/tbl/template.py (refactored) @@ -8,7 +8,7 @@ # URL: # For license information, see LICENSE.TXT -from __future__ import print_function + import itertools as it from nltk.tbl.feature import Feature + for i in '$(find ../python3 -type f -name '\''*.py'\'' |grep -v '\''Tables\.py'\'')' + 2to3 -w -n ../python3/nltk/tbl/rule.py RefactoringTool: Skipping optional fixer: buffer RefactoringTool: Skipping optional fixer: idioms RefactoringTool: Skipping optional fixer: set_literal RefactoringTool: Skipping optional fixer: ws_comma RefactoringTool: Refactored ../python3/nltk/tbl/rule.py RefactoringTool: Files that were 
modified: RefactoringTool: ../python3/nltk/tbl/rule.py --- ../python3/nltk/tbl/rule.py (original) +++ ../python3/nltk/tbl/rule.py (refactored) @@ -8,7 +8,7 @@ # URL: # For license information, see LICENSE.TXT -from __future__ import print_function + from nltk.compat import python_2_unicode_compatible, unicode_repr from nltk import jsontags + for i in '$(find ../python3 -type f -name '\''*.py'\'' |grep -v '\''Tables\.py'\'')' + 2to3 -w -n ../python3/nltk/tbl/feature.py RefactoringTool: Skipping optional fixer: buffer RefactoringTool: Skipping optional fixer: idioms RefactoringTool: Skipping optional fixer: set_literal RefactoringTool: Skipping optional fixer: ws_comma RefactoringTool: Refactored ../python3/nltk/tbl/feature.py RefactoringTool: Files that were modified: RefactoringTool: ../python3/nltk/tbl/feature.py --- ../python3/nltk/tbl/feature.py (original) +++ ../python3/nltk/tbl/feature.py (refactored) @@ -8,7 +8,7 @@ # URL: # For license information, see LICENSE.TXT -from __future__ import division, print_function, unicode_literals + class Feature(object): + for i in '$(find ../python3 -type f -name '\''*.py'\'' |grep -v '\''Tables\.py'\'')' + 2to3 -w -n ../python3/nltk/tbl/erroranalysis.py RefactoringTool: Skipping optional fixer: buffer RefactoringTool: Skipping optional fixer: idioms RefactoringTool: Skipping optional fixer: set_literal RefactoringTool: Skipping optional fixer: ws_comma RefactoringTool: Refactored ../python3/nltk/tbl/erroranalysis.py RefactoringTool: Files that were modified: RefactoringTool: ../python3/nltk/tbl/erroranalysis.py --- ../python3/nltk/tbl/erroranalysis.py (original) +++ ../python3/nltk/tbl/erroranalysis.py (refactored) @@ -8,7 +8,7 @@ # URL: # For license information, see LICENSE.TXT -from __future__ import print_function + # returns a list of errors in string format + for i in '$(find ../python3 -type f -name '\''*.py'\'' |grep -v '\''Tables\.py'\'')' + 2to3 -w -n ../python3/nltk/tbl/demo.py RefactoringTool: Skipping optional fixer: buffer RefactoringTool: Skipping optional fixer: idioms RefactoringTool: Skipping optional fixer: set_literal RefactoringTool: Skipping optional fixer: ws_comma RefactoringTool: Refactored ../python3/nltk/tbl/demo.py RefactoringTool: Files that were modified: RefactoringTool: ../python3/nltk/tbl/demo.py --- ../python3/nltk/tbl/demo.py (original) +++ ../python3/nltk/tbl/demo.py (refactored) @@ -8,7 +8,7 @@ # URL: # For license information, see LICENSE.TXT -from __future__ import print_function, absolute_import, division + import os import pickle + for i in '$(find ../python3 -type f -name '\''*.py'\'' |grep -v '\''Tables\.py'\'')' + 2to3 -w -n ../python3/nltk/tbl/api.py RefactoringTool: Skipping optional fixer: buffer RefactoringTool: Skipping optional fixer: idioms RefactoringTool: Skipping optional fixer: set_literal RefactoringTool: Skipping optional fixer: ws_comma RefactoringTool: No files need to be modified. 
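Every file in the loop above goes through the same 2to3 pass (`-w` writes changes back in place, `-n` skips the `.bak` backups), and the most frequent change in these diffs is the `future` fixer deleting `from __future__ import ...` lines, since those behaviors are the defaults on Python 3. A minimal before/after sketch of that fixer, using hypothetical example code rather than actual NLTK sources:

    # Python 2 original (what 2to3 sees; illustrative, not from NLTK):
    #     from __future__ import print_function, division
    #     print(3 / 2)    # prints 1.5 on Python 2 only because of the import
    # Python 3 result after `2to3 -w -n example.py` -- the import is dropped:
    print(3 / 2)          # true division and print() are defaults in Python 3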
+ for i in '$(find ../python3 -type f -name '\''*.py'\'' |grep -v '\''Tables\.py'\'')' + 2to3 -w -n ../python3/nltk/tbl/__init__.py RefactoringTool: Skipping optional fixer: buffer RefactoringTool: Skipping optional fixer: idioms RefactoringTool: Skipping optional fixer: set_literal RefactoringTool: Skipping optional fixer: ws_comma RefactoringTool: No changes to ../python3/nltk/tbl/__init__.py RefactoringTool: Files that need to be modified: RefactoringTool: ../python3/nltk/tbl/__init__.py + for i in '$(find ../python3 -type f -name '\''*.py'\'' |grep -v '\''Tables\.py'\'')' + 2to3 -w -n ../python3/nltk/tag/util.py RefactoringTool: Skipping optional fixer: buffer RefactoringTool: Skipping optional fixer: idioms RefactoringTool: Skipping optional fixer: set_literal RefactoringTool: Skipping optional fixer: ws_comma RefactoringTool: No changes to ../python3/nltk/tag/util.py RefactoringTool: Files that need to be modified: RefactoringTool: ../python3/nltk/tag/util.py + for i in '$(find ../python3 -type f -name '\''*.py'\'' |grep -v '\''Tables\.py'\'')' + 2to3 -w -n ../python3/nltk/tag/tnt.py RefactoringTool: Skipping optional fixer: buffer RefactoringTool: Skipping optional fixer: idioms RefactoringTool: Skipping optional fixer: set_literal RefactoringTool: Skipping optional fixer: ws_comma RefactoringTool: Refactored ../python3/nltk/tag/tnt.py RefactoringTool: Files that were modified: RefactoringTool: ../python3/nltk/tag/tnt.py --- ../python3/nltk/tag/tnt.py (original) +++ ../python3/nltk/tag/tnt.py (refactored) @@ -12,7 +12,7 @@ http://acl.ldc.upenn.edu/A/A00/A00-1031.pdf ''' -from __future__ import print_function + from math import log from operator import itemgetter @@ -209,7 +209,7 @@ # for each t3 given t1,t2 in system # (NOTE: tag actually represents (tag,C)) # However no effect within this function - for tag in self._tri[history].keys(): + for tag in list(self._tri[history].keys()): # if there has only been 1 occurrence of this tag in the data # then ignore this trigram. @@ -363,7 +363,7 @@ for (history, curr_sent_logprob) in current_states: logprobs = [] - for t in self._wd[word].keys(): + for t in list(self._wd[word].keys()): p_uni = self._uni.freq((t,C)) p_bi = self._bi[history[-1]].freq((t,C)) p_tri = self._tri[tuple(history[-2:])].freq((t,C)) + for i in '$(find ../python3 -type f -name '\''*.py'\'' |grep -v '\''Tables\.py'\'')' + 2to3 -w -n ../python3/nltk/tag/stanford.py RefactoringTool: Skipping optional fixer: buffer RefactoringTool: Skipping optional fixer: idioms RefactoringTool: Skipping optional fixer: set_literal RefactoringTool: Skipping optional fixer: ws_comma RefactoringTool: No changes to ../python3/nltk/tag/stanford.py RefactoringTool: Files that need to be modified: RefactoringTool: ../python3/nltk/tag/stanford.py + for i in '$(find ../python3 -type f -name '\''*.py'\'' |grep -v '\''Tables\.py'\'')' + 2to3 -w -n ../python3/nltk/tag/sequential.py RefactoringTool: Skipping optional fixer: buffer RefactoringTool: Skipping optional fixer: idioms RefactoringTool: Skipping optional fixer: set_literal RefactoringTool: Skipping optional fixer: ws_comma RefactoringTool: Refactored ../python3/nltk/tag/sequential.py RefactoringTool: Files that were modified: RefactoringTool: ../python3/nltk/tag/sequential.py --- ../python3/nltk/tag/sequential.py (original) +++ ../python3/nltk/tag/sequential.py (refactored) @@ -17,7 +17,7 @@ consulted instead. Any SequentialBackoffTagger may serve as a backoff tagger for any other SequentialBackoffTagger. 
""" -from __future__ import print_function, unicode_literals + import re @@ -175,7 +175,7 @@ # Count how many times each tag occurs in each context. fd = ConditionalFreqDist() for sentence in tagged_corpus: - tokens, tags = zip(*sentence) + tokens, tags = list(zip(*sentence)) for index, (token, tag) in enumerate(sentence): # Record the event. token_count += 1 @@ -531,7 +531,7 @@ SequentialBackoffTagger.__init__(self, backoff) labels = ['g'+str(i) for i in range(len(regexps))] tags = [tag for regex, tag in regexps] - self._map = dict(zip(labels, tags)) + self._map = dict(list(zip(labels, tags))) regexps_labels = [(regex, label) for ((regex,tag),label) in zip(regexps,labels)] self._regexs = re.compile('|'.join('(?P<%s>%s)' % (label, regex) for regex,label in regexps_labels)) self._size=len(regexps) @@ -655,7 +655,7 @@ for sentence in tagged_corpus: history = [] - untagged_sentence, tags = zip(*sentence) + untagged_sentence, tags = list(zip(*sentence)) for index in range(len(sentence)): featureset = self.feature_detector(untagged_sentence, index, history) + for i in '$(find ../python3 -type f -name '\''*.py'\'' |grep -v '\''Tables\.py'\'')' + 2to3 -w -n ../python3/nltk/tag/senna.py RefactoringTool: Skipping optional fixer: buffer RefactoringTool: Skipping optional fixer: idioms RefactoringTool: Skipping optional fixer: set_literal RefactoringTool: Skipping optional fixer: ws_comma RefactoringTool: No changes to ../python3/nltk/tag/senna.py RefactoringTool: Files that need to be modified: RefactoringTool: ../python3/nltk/tag/senna.py + for i in '$(find ../python3 -type f -name '\''*.py'\'' |grep -v '\''Tables\.py'\'')' + 2to3 -w -n ../python3/nltk/tag/mapping.py RefactoringTool: Skipping optional fixer: buffer RefactoringTool: Skipping optional fixer: idioms RefactoringTool: Skipping optional fixer: set_literal RefactoringTool: Skipping optional fixer: ws_comma RefactoringTool: Refactored ../python3/nltk/tag/mapping.py RefactoringTool: Files that were modified: RefactoringTool: ../python3/nltk/tag/mapping.py --- ../python3/nltk/tag/mapping.py (original) +++ ../python3/nltk/tag/mapping.py (refactored) @@ -29,7 +29,7 @@ """ -from __future__ import print_function, unicode_literals, division + from collections import defaultdict from os.path import join + for i in '$(find ../python3 -type f -name '\''*.py'\'' |grep -v '\''Tables\.py'\'')' + 2to3 -w -n ../python3/nltk/tag/hunpos.py RefactoringTool: Skipping optional fixer: buffer RefactoringTool: Skipping optional fixer: idioms RefactoringTool: Skipping optional fixer: set_literal RefactoringTool: Skipping optional fixer: ws_comma RefactoringTool: No changes to ../python3/nltk/tag/hunpos.py RefactoringTool: Files that need to be modified: RefactoringTool: ../python3/nltk/tag/hunpos.py + for i in '$(find ../python3 -type f -name '\''*.py'\'' |grep -v '\''Tables\.py'\'')' + 2to3 -w -n ../python3/nltk/tag/hmm.py RefactoringTool: Skipping optional fixer: buffer RefactoringTool: Skipping optional fixer: idioms RefactoringTool: Skipping optional fixer: set_literal RefactoringTool: Skipping optional fixer: ws_comma RefactoringTool: Refactored ../python3/nltk/tag/hmm.py RefactoringTool: Files that were modified: RefactoringTool: ../python3/nltk/tag/hmm.py --- ../python3/nltk/tag/hmm.py (original) +++ ../python3/nltk/tag/hmm.py (refactored) @@ -68,7 +68,7 @@ For more information, please consult the source code for this module, which includes extensive demonstration code. 
""" -from __future__ import print_function, unicode_literals, division + import re import itertools @@ -269,7 +269,7 @@ def _tag(self, unlabeled_sequence): path = self._best_path(unlabeled_sequence) - return list(izip(unlabeled_sequence, path)) + return list(zip(unlabeled_sequence, path)) def _output_logprob(self, state, symbol): """ @@ -778,10 +778,10 @@ return list(itertools.chain(*seq)) test_sequence = self._transform(test_sequence) - predicted_sequence = list(imap(self._tag, imap(words, test_sequence))) + predicted_sequence = list(map(self._tag, map(words, test_sequence))) if verbose: - for test_sent, predicted_sent in izip(test_sequence, predicted_sequence): + for test_sent, predicted_sent in zip(test_sequence, predicted_sequence): print('Test:', ' '.join('%s/%s' % (token, tag) for (token, tag) in test_sent)) @@ -799,8 +799,8 @@ print() print('-' * 60) - test_tags = flatten(imap(tags, test_sequence)) - predicted_tags = flatten(imap(tags, predicted_sequence)) + test_tags = flatten(map(tags, test_sequence)) + predicted_tags = flatten(map(tags, predicted_sequence)) acc = accuracy(test_tags, predicted_tags) count = sum(len(sent) for sent in test_sequence) @@ -1117,7 +1117,7 @@ def _create_hmm_tagger(states, symbols, A, B, pi): def pd(values, samples): - d = dict(zip(samples, values)) + d = dict(list(zip(samples, values))) return DictionaryProbDist(d) def cpd(array, conditions, samples): + for i in '$(find ../python3 -type f -name '\''*.py'\'' |grep -v '\''Tables\.py'\'')' + 2to3 -w -n ../python3/nltk/tag/brill_trainer_orig.py RefactoringTool: Skipping optional fixer: buffer RefactoringTool: Skipping optional fixer: idioms RefactoringTool: Skipping optional fixer: set_literal RefactoringTool: Skipping optional fixer: ws_comma RefactoringTool: Refactored ../python3/nltk/tag/brill_trainer_orig.py RefactoringTool: Files that were modified: RefactoringTool: ../python3/nltk/tag/brill_trainer_orig.py --- ../python3/nltk/tag/brill_trainer_orig.py (original) +++ ../python3/nltk/tag/brill_trainer_orig.py (refactored) @@ -8,7 +8,7 @@ # URL: # For license information, see LICENSE.TXT -from __future__ import print_function, division + from collections import defaultdict import textwrap @@ -362,7 +362,7 @@ # Convert the dictionary into a list of (rule, score) tuples, # sorted in descending order of score. - return sorted(rule_score_dict.items(), + return sorted(list(rule_score_dict.items()), key=lambda rule_score: -rule_score[1]) def _find_rules_at(self, test_sent, train_sent, i): + for i in '$(find ../python3 -type f -name '\''*.py'\'' |grep -v '\''Tables\.py'\'')' + 2to3 -w -n ../python3/nltk/tag/brill_trainer.py RefactoringTool: Skipping optional fixer: buffer RefactoringTool: Skipping optional fixer: idioms RefactoringTool: Skipping optional fixer: set_literal RefactoringTool: Skipping optional fixer: ws_comma RefactoringTool: Refactored ../python3/nltk/tag/brill_trainer.py RefactoringTool: Files that were modified: RefactoringTool: ../python3/nltk/tag/brill_trainer.py --- ../python3/nltk/tag/brill_trainer.py (original) +++ ../python3/nltk/tag/brill_trainer.py (refactored) @@ -8,7 +8,7 @@ # URL: # For license information, see LICENSE.TXT -from __future__ import print_function, division + import bisect from collections import defaultdict @@ -439,7 +439,7 @@ score *and* which has been tested against the entire corpus, we can conclude that it's the next best rule. 
""" - for max_score in sorted(self._rules_by_score.keys(), reverse=True): + for max_score in sorted(list(self._rules_by_score.keys()), reverse=True): if len(self._rules_by_score) == 0: return None if max_score < min_score or max_score <= 0: @@ -469,7 +469,7 @@ if min_acc is None: return rule else: - changes = self._positions_by_rule[rule].values() + changes = list(self._positions_by_rule[rule].values()) num_fixed = len([c for c in changes if c==1]) num_broken = len([c for c in changes if c==-1]) #acc here is fixed/(fixed+broken); could also be @@ -566,7 +566,7 @@ # that are harmful or neutral. We therefore need to # update any rule whose first_unknown_position is past # this rule. - for new_rule, pos in self._first_unknown_position.items(): + for new_rule, pos in list(self._first_unknown_position.items()): if pos > (sentnum, wordnum): if new_rule not in old_rules: num_new += 1 @@ -596,7 +596,7 @@ assert self._rule_scores[rule] == \ sum(self._positions_by_rule[rule].values()) - changes = self._positions_by_rule[rule].values() + changes = list(self._positions_by_rule[rule].values()) num_changed = len(changes) num_fixed = len([c for c in changes if c==1]) num_broken = len([c for c in changes if c==-1]) + for i in '$(find ../python3 -type f -name '\''*.py'\'' |grep -v '\''Tables\.py'\'')' + 2to3 -w -n ../python3/nltk/tag/brill.py RefactoringTool: Skipping optional fixer: buffer RefactoringTool: Skipping optional fixer: idioms RefactoringTool: Skipping optional fixer: set_literal RefactoringTool: Skipping optional fixer: ws_comma RefactoringTool: Refactored ../python3/nltk/tag/brill.py RefactoringTool: Files that were modified: RefactoringTool: ../python3/nltk/tag/brill.py --- ../python3/nltk/tag/brill.py (original) +++ ../python3/nltk/tag/brill.py (refactored) @@ -8,7 +8,7 @@ # URL: # For license information, see LICENSE.TXT -from __future__ import print_function, division + from collections import defaultdict @@ -324,7 +324,7 @@ "final: {finalerrors:5d} {finalacc:.4f} ".format(**train_stats)) head = "#ID | Score (train) | #Rules | Template" print(head, "\n", "-" * len(head), sep="") - train_tplscores = sorted(weighted_traincounts.items(), key=det_tplsort, reverse=True) + train_tplscores = sorted(list(weighted_traincounts.items()), key=det_tplsort, reverse=True) for (tid, trainscore) in train_tplscores: s = "{0:s} | {1:5d} {2:5.3f} |{3:4d} {4:.3f} | {5:s}".format( tid, @@ -349,7 +349,7 @@ tottestscores = sum(testscores) head = "#ID | Score (test) | Score (train) | #Rules | Template" print(head, "\n", "-" * len(head), sep="") - test_tplscores = sorted(weighted_testcounts.items(), key=det_tplsort, reverse=True) + test_tplscores = sorted(list(weighted_testcounts.items()), key=det_tplsort, reverse=True) for (tid, testscore) in test_tplscores: s = "{0:s} |{1:5d} {2:6.3f} | {3:4d} {4:.3f} |{5:4d} {6:.3f} | {7:s}".format( tid, + for i in '$(find ../python3 -type f -name '\''*.py'\'' |grep -v '\''Tables\.py'\'')' + 2to3 -w -n ../python3/nltk/tag/api.py RefactoringTool: Skipping optional fixer: buffer RefactoringTool: Skipping optional fixer: idioms RefactoringTool: Skipping optional fixer: set_literal RefactoringTool: Skipping optional fixer: ws_comma RefactoringTool: No changes to ../python3/nltk/tag/api.py RefactoringTool: Files that need to be modified: RefactoringTool: ../python3/nltk/tag/api.py + for i in '$(find ../python3 -type f -name '\''*.py'\'' |grep -v '\''Tables\.py'\'')' + 2to3 -w -n ../python3/nltk/tag/__init__.py RefactoringTool: Skipping optional fixer: buffer RefactoringTool: 
Skipping optional fixer: idioms RefactoringTool: Skipping optional fixer: set_literal RefactoringTool: Skipping optional fixer: ws_comma RefactoringTool: Refactored ../python3/nltk/tag/__init__.py RefactoringTool: Files that were modified: RefactoringTool: ../python3/nltk/tag/__init__.py --- ../python3/nltk/tag/__init__.py (original) +++ ../python3/nltk/tag/__init__.py (refactored) @@ -58,7 +58,7 @@ For more information, please consult chapter 5 of the NLTK Book. """ -from __future__ import print_function + from nltk.tag.api import TaggerI from nltk.tag.util import str2tuple, tuple2str, untag + for i in '$(find ../python3 -type f -name '\''*.py'\'' |grep -v '\''Tables\.py'\'')' + 2to3 -w -n ../python3/nltk/stem/wordnet.py RefactoringTool: Skipping optional fixer: buffer RefactoringTool: Skipping optional fixer: idioms RefactoringTool: Skipping optional fixer: set_literal RefactoringTool: Skipping optional fixer: ws_comma RefactoringTool: Refactored ../python3/nltk/stem/wordnet.py RefactoringTool: Files that were modified: RefactoringTool: ../python3/nltk/stem/wordnet.py --- ../python3/nltk/stem/wordnet.py (original) +++ ../python3/nltk/stem/wordnet.py (refactored) @@ -5,7 +5,7 @@ # Edward Loper # URL: # For license information, see LICENSE.TXT -from __future__ import unicode_literals + from nltk.corpus.reader.wordnet import NOUN from nltk.corpus import wordnet + for i in '$(find ../python3 -type f -name '\''*.py'\'' |grep -v '\''Tables\.py'\'')' + 2to3 -w -n ../python3/nltk/stem/snowball.py RefactoringTool: Skipping optional fixer: buffer RefactoringTool: Skipping optional fixer: idioms RefactoringTool: Skipping optional fixer: set_literal RefactoringTool: Skipping optional fixer: ws_comma RefactoringTool: Refactored ../python3/nltk/stem/snowball.py RefactoringTool: Files that were modified: RefactoringTool: ../python3/nltk/stem/snowball.py --- ../python3/nltk/stem/snowball.py (original) +++ ../python3/nltk/stem/snowball.py (refactored) @@ -18,7 +18,7 @@ There is also a demo function: `snowball.demo()`. 
""" -from __future__ import unicode_literals, print_function + from nltk import compat from nltk.corpus import stopwords + for i in '$(find ../python3 -type f -name '\''*.py'\'' |grep -v '\''Tables\.py'\'')' + 2to3 -w -n ../python3/nltk/stem/rslp.py RefactoringTool: Skipping optional fixer: buffer RefactoringTool: Skipping optional fixer: idioms RefactoringTool: Skipping optional fixer: set_literal RefactoringTool: Skipping optional fixer: ws_comma RefactoringTool: Refactored ../python3/nltk/stem/rslp.py WARNING: couldn't encode ../python3/nltk/stem/rslp.py's diff for your terminal RefactoringTool: Files that were modified: RefactoringTool: ../python3/nltk/stem/rslp.py --- ../python3/nltk/stem/rslp.py (original) +++ ../python3/nltk/stem/rslp.py (refactored) @@ -30,7 +30,7 @@ + for i in '$(find ../python3 -type f -name '\''*.py'\'' |grep -v '\''Tables\.py'\'')' + 2to3 -w -n ../python3/nltk/stem/regexp.py RefactoringTool: Skipping optional fixer: buffer RefactoringTool: Skipping optional fixer: idioms RefactoringTool: Skipping optional fixer: set_literal RefactoringTool: Skipping optional fixer: ws_comma RefactoringTool: Refactored ../python3/nltk/stem/regexp.py RefactoringTool: Files that were modified: RefactoringTool: ../python3/nltk/stem/regexp.py --- ../python3/nltk/stem/regexp.py (original) +++ ../python3/nltk/stem/regexp.py (refactored) @@ -6,7 +6,7 @@ # Steven Bird # URL: # For license information, see LICENSE.TXT -from __future__ import unicode_literals + import re from nltk.stem.api import StemmerI + for i in '$(find ../python3 -type f -name '\''*.py'\'' |grep -v '\''Tables\.py'\'')' + 2to3 -w -n ../python3/nltk/stem/porter.py RefactoringTool: Skipping optional fixer: buffer RefactoringTool: Skipping optional fixer: idioms RefactoringTool: Skipping optional fixer: set_literal RefactoringTool: Skipping optional fixer: ws_comma RefactoringTool: Refactored ../python3/nltk/stem/porter.py RefactoringTool: Files that were modified: RefactoringTool: ../python3/nltk/stem/porter.py --- ../python3/nltk/stem/porter.py (original) +++ ../python3/nltk/stem/porter.py (refactored) @@ -85,7 +85,7 @@ version of this module is maintained by the NLTK developers, and is available from """ -from __future__ import print_function, unicode_literals + ## --NLTK-- ## Declare this module's documentation format. + for i in '$(find ../python3 -type f -name '\''*.py'\'' |grep -v '\''Tables\.py'\'')' + 2to3 -w -n ../python3/nltk/stem/lancaster.py RefactoringTool: Skipping optional fixer: buffer RefactoringTool: Skipping optional fixer: idioms RefactoringTool: Skipping optional fixer: set_literal RefactoringTool: Skipping optional fixer: ws_comma RefactoringTool: Refactored ../python3/nltk/stem/lancaster.py RefactoringTool: Files that were modified: RefactoringTool: ../python3/nltk/stem/lancaster.py --- ../python3/nltk/stem/lancaster.py (original) +++ ../python3/nltk/stem/lancaster.py (refactored) @@ -9,7 +9,7 @@ A word stemmer based on the Lancaster stemming algorithm. Paice, Chris D. "Another Stemmer." ACM SIGIR Forum 24.3 (1990): 56-61. 
""" -from __future__ import unicode_literals + import re from nltk.stem.api import StemmerI + for i in '$(find ../python3 -type f -name '\''*.py'\'' |grep -v '\''Tables\.py'\'')' + 2to3 -w -n ../python3/nltk/stem/isri.py RefactoringTool: Skipping optional fixer: buffer RefactoringTool: Skipping optional fixer: idioms RefactoringTool: Skipping optional fixer: set_literal RefactoringTool: Skipping optional fixer: ws_comma RefactoringTool: Refactored ../python3/nltk/stem/isri.py RefactoringTool: Files that were modified: RefactoringTool: ../python3/nltk/stem/isri.py --- ../python3/nltk/stem/isri.py (original) +++ ../python3/nltk/stem/isri.py (refactored) @@ -29,7 +29,7 @@ increases the word ambiguities and changes the original root. """ -from __future__ import unicode_literals + import re from nltk.stem.api import StemmerI + for i in '$(find ../python3 -type f -name '\''*.py'\'' |grep -v '\''Tables\.py'\'')' + 2to3 -w -n ../python3/nltk/stem/api.py RefactoringTool: Skipping optional fixer: buffer RefactoringTool: Skipping optional fixer: idioms RefactoringTool: Skipping optional fixer: set_literal RefactoringTool: Skipping optional fixer: ws_comma RefactoringTool: No changes to ../python3/nltk/stem/api.py RefactoringTool: Files that need to be modified: RefactoringTool: ../python3/nltk/stem/api.py + for i in '$(find ../python3 -type f -name '\''*.py'\'' |grep -v '\''Tables\.py'\'')' + 2to3 -w -n ../python3/nltk/stem/__init__.py RefactoringTool: Skipping optional fixer: buffer RefactoringTool: Skipping optional fixer: idioms RefactoringTool: Skipping optional fixer: set_literal RefactoringTool: Skipping optional fixer: ws_comma RefactoringTool: No changes to ../python3/nltk/stem/__init__.py RefactoringTool: Files that need to be modified: RefactoringTool: ../python3/nltk/stem/__init__.py + for i in '$(find ../python3 -type f -name '\''*.py'\'' |grep -v '\''Tables\.py'\'')' + 2to3 -w -n ../python3/nltk/sem/util.py RefactoringTool: Skipping optional fixer: buffer RefactoringTool: Skipping optional fixer: idioms RefactoringTool: Skipping optional fixer: set_literal RefactoringTool: Skipping optional fixer: ws_comma RefactoringTool: Refactored ../python3/nltk/sem/util.py RefactoringTool: Files that were modified: RefactoringTool: ../python3/nltk/sem/util.py --- ../python3/nltk/sem/util.py (original) +++ ../python3/nltk/sem/util.py (refactored) @@ -12,7 +12,7 @@ syntax tree, followed by evaluation of the semantic representation in a first-order model. 
""" -from __future__ import print_function, unicode_literals + import codecs from nltk.sem import evaluate + for i in '$(find ../python3 -type f -name '\''*.py'\'' |grep -v '\''Tables\.py'\'')' + 2to3 -w -n ../python3/nltk/sem/skolemize.py RefactoringTool: Skipping optional fixer: buffer RefactoringTool: Skipping optional fixer: idioms RefactoringTool: Skipping optional fixer: set_literal RefactoringTool: Skipping optional fixer: ws_comma RefactoringTool: No changes to ../python3/nltk/sem/skolemize.py RefactoringTool: Files that need to be modified: RefactoringTool: ../python3/nltk/sem/skolemize.py + for i in '$(find ../python3 -type f -name '\''*.py'\'' |grep -v '\''Tables\.py'\'')' + 2to3 -w -n ../python3/nltk/sem/relextract.py RefactoringTool: Skipping optional fixer: buffer RefactoringTool: Skipping optional fixer: idioms RefactoringTool: Skipping optional fixer: set_literal RefactoringTool: Skipping optional fixer: ws_comma RefactoringTool: Refactored ../python3/nltk/sem/relextract.py RefactoringTool: Files that were modified: RefactoringTool: ../python3/nltk/sem/relextract.py --- ../python3/nltk/sem/relextract.py (original) +++ ../python3/nltk/sem/relextract.py (refactored) @@ -19,7 +19,7 @@ - A clause is an atom of the form ``relsym(subjsym, objsym)``, where the relation, subject and object have been canonicalized to single strings. """ -from __future__ import print_function + # todo: get a more general solution to canonicalized symbols for clauses -- maybe use xmlcharrefs? + for i in '$(find ../python3 -type f -name '\''*.py'\'' |grep -v '\''Tables\.py'\'')' + 2to3 -w -n ../python3/nltk/sem/logic.py RefactoringTool: Skipping optional fixer: buffer RefactoringTool: Skipping optional fixer: idioms RefactoringTool: Skipping optional fixer: set_literal RefactoringTool: Skipping optional fixer: ws_comma RefactoringTool: Refactored ../python3/nltk/sem/logic.py RefactoringTool: Files that were modified: RefactoringTool: ../python3/nltk/sem/logic.py --- ../python3/nltk/sem/logic.py (original) +++ ../python3/nltk/sem/logic.py (refactored) @@ -10,7 +10,7 @@ A version of first order predicate logic, built on top of the typed lambda calculus. 
""" -from __future__ import print_function, unicode_literals + import re import operator + for i in '$(find ../python3 -type f -name '\''*.py'\'' |grep -v '\''Tables\.py'\'')' + 2to3 -w -n ../python3/nltk/sem/linearlogic.py RefactoringTool: Skipping optional fixer: buffer RefactoringTool: Skipping optional fixer: idioms RefactoringTool: Skipping optional fixer: set_literal RefactoringTool: Skipping optional fixer: ws_comma RefactoringTool: Refactored ../python3/nltk/sem/linearlogic.py RefactoringTool: Files that were modified: RefactoringTool: ../python3/nltk/sem/linearlogic.py --- ../python3/nltk/sem/linearlogic.py (original) +++ ../python3/nltk/sem/linearlogic.py (refactored) @@ -5,7 +5,7 @@ # Copyright (C) 2001-2014 NLTK Project # URL: # For license information, see LICENSE.TXT -from __future__ import print_function, unicode_literals + from nltk.internals import Counter from nltk.compat import string_types, python_2_unicode_compatible @@ -347,7 +347,7 @@ self.d = {} if isinstance(bindings, dict): - bindings = bindings.items() + bindings = list(bindings.items()) if bindings: for (v, b) in bindings: + for i in '$(find ../python3 -type f -name '\''*.py'\'' |grep -v '\''Tables\.py'\'')' + 2to3 -w -n ../python3/nltk/sem/lfg.py RefactoringTool: Skipping optional fixer: buffer RefactoringTool: Skipping optional fixer: idioms RefactoringTool: Skipping optional fixer: set_literal RefactoringTool: Skipping optional fixer: ws_comma RefactoringTool: Refactored ../python3/nltk/sem/lfg.py RefactoringTool: Files that were modified: RefactoringTool: ../python3/nltk/sem/lfg.py --- ../python3/nltk/sem/lfg.py (original) +++ ../python3/nltk/sem/lfg.py (refactored) @@ -5,7 +5,7 @@ # Copyright (C) 2001-2014 NLTK Project # URL: # For license information, see LICENSE.TXT -from __future__ import print_function, division, unicode_literals + from nltk.internals import Counter from nltk.compat import python_2_unicode_compatible + for i in '$(find ../python3 -type f -name '\''*.py'\'' |grep -v '\''Tables\.py'\'')' + 2to3 -w -n ../python3/nltk/sem/hole.py RefactoringTool: Skipping optional fixer: buffer RefactoringTool: Skipping optional fixer: idioms RefactoringTool: Skipping optional fixer: set_literal RefactoringTool: Skipping optional fixer: ws_comma RefactoringTool: Refactored ../python3/nltk/sem/hole.py RefactoringTool: Files that were modified: RefactoringTool: ../python3/nltk/sem/hole.py --- ../python3/nltk/sem/hole.py (original) +++ ../python3/nltk/sem/hole.py (refactored) @@ -19,7 +19,7 @@ representation that is not easy to read. We use a "plugging" algorithm to convert that representation into first-order logic formulas. """ -from __future__ import print_function, unicode_literals + from functools import reduce @@ -346,7 +346,7 @@ print('Top hole: ', hole_sem.top_hole) print('Top labels: ', hole_sem.top_most_labels) print('Fragments:') - for (l,f) in hole_sem.fragments.items(): + for (l,f) in list(hole_sem.fragments.items()): print('\t%s: %s' % (l, f)) # Find all the possible ways to plug the formulas together. 
+ for i in '$(find ../python3 -type f -name '\''*.py'\'' |grep -v '\''Tables\.py'\'')' + 2to3 -w -n ../python3/nltk/sem/glue.py RefactoringTool: Skipping optional fixer: buffer RefactoringTool: Skipping optional fixer: idioms RefactoringTool: Skipping optional fixer: set_literal RefactoringTool: Skipping optional fixer: ws_comma RefactoringTool: Refactored ../python3/nltk/sem/glue.py RefactoringTool: Files that were modified: RefactoringTool: ../python3/nltk/sem/glue.py --- ../python3/nltk/sem/glue.py (original) +++ ../python3/nltk/sem/glue.py (refactored) @@ -5,7 +5,7 @@ # Copyright (C) 2001-2014 NLTK Project # URL: # For license information, see LICENSE.TXT -from __future__ import print_function, division, unicode_literals + import os + for i in '$(find ../python3 -type f -name '\''*.py'\'' |grep -v '\''Tables\.py'\'')' + 2to3 -w -n ../python3/nltk/sem/evaluate.py RefactoringTool: Skipping optional fixer: buffer RefactoringTool: Skipping optional fixer: idioms RefactoringTool: Skipping optional fixer: set_literal RefactoringTool: Skipping optional fixer: ws_comma RefactoringTool: Refactored ../python3/nltk/sem/evaluate.py RefactoringTool: Files that were modified: RefactoringTool: ../python3/nltk/sem/evaluate.py --- ../python3/nltk/sem/evaluate.py (original) +++ ../python3/nltk/sem/evaluate.py (refactored) @@ -13,7 +13,7 @@ This module provides data structures for representing first-order models. """ -from __future__ import print_function, unicode_literals + from pprint import pformat import inspect @@ -37,10 +37,10 @@ def trace(f, *args, **kw): argspec = inspect.getargspec(f) - d = dict(zip(argspec[0], args)) + d = dict(list(zip(argspec[0], args))) if d.pop('trace', None): print() - for item in d.items(): + for item in list(d.items()): print("%s => %s" % item) return f(*args, **kw) @@ -136,7 +136,7 @@ def domain(self): """Set-theoretic domain of the value-space of a Valuation.""" dom = [] - for val in self.values(): + for val in list(self.values()): if isinstance(val, string_types): dom.append(val) elif not isinstance(val, bool): @@ -328,7 +328,7 @@ Create a more pretty-printable version of the assignment. 
""" list_ = [] - for item in self.items(): + for item in list(self.items()): pair = (item[1], item[0]) list_.append(pair) self.variant = list_ + for i in '$(find ../python3 -type f -name '\''*.py'\'' |grep -v '\''Tables\.py'\'')' + 2to3 -w -n ../python3/nltk/sem/drt_glue_demo.py RefactoringTool: Skipping optional fixer: buffer RefactoringTool: Skipping optional fixer: idioms RefactoringTool: Skipping optional fixer: set_literal RefactoringTool: Skipping optional fixer: ws_comma RefactoringTool: Refactored ../python3/nltk/sem/drt_glue_demo.py RefactoringTool: Files that were modified: RefactoringTool: ../python3/nltk/sem/drt_glue_demo.py --- ../python3/nltk/sem/drt_glue_demo.py (original) +++ ../python3/nltk/sem/drt_glue_demo.py (refactored) @@ -164,8 +164,8 @@ self._top.bind('', self.destroy) self._top.bind('', self.destroy) self._top.bind('', self.destroy) - self._top.bind('n', self.next) - self._top.bind('', self.next) + self._top.bind('n', self.__next__) + self._top.bind('', self.__next__) self._top.bind('p', self.prev) self._top.bind('', self.prev) @@ -178,7 +178,7 @@ command=self.prev,).pack(side='left') Button(buttonframe, text='Next', background='#90c0d0', foreground='black', - command=self.next,).pack(side='left') + command=self.__next__,).pack(side='left') def _configure(self, event): self._autostep = 0 @@ -210,7 +210,7 @@ actionmenu = Menu(menubar, tearoff=0) actionmenu.add_command(label='Next', underline=0, - command=self.next, accelerator='n, Space') + command=self.__next__, accelerator='n, Space') actionmenu.add_command(label='Previous', underline=0, command=self.prev, accelerator='p, Backspace') menubar.add_cascade(label='Action', underline=0, menu=actionmenu) @@ -340,7 +340,7 @@ "Written by Daniel H. Garrette") TITLE = 'About: NLTK DRT Glue Demo' try: - from tkMessageBox import Message + from tkinter.messagebox import Message Message(message=ABOUT, title=TITLE).show() except: ShowText(self._top, TITLE, ABOUT) + for i in '$(find ../python3 -type f -name '\''*.py'\'' |grep -v '\''Tables\.py'\'')' + 2to3 -w -n ../python3/nltk/sem/drt.py RefactoringTool: Skipping optional fixer: buffer RefactoringTool: Skipping optional fixer: idioms RefactoringTool: Skipping optional fixer: set_literal RefactoringTool: Skipping optional fixer: ws_comma RefactoringTool: Refactored ../python3/nltk/sem/drt.py RefactoringTool: Files that were modified: RefactoringTool: ../python3/nltk/sem/drt.py --- ../python3/nltk/sem/drt.py (original) +++ ../python3/nltk/sem/drt.py (refactored) @@ -5,7 +5,7 @@ # Copyright (C) 2001-2014 NLTK Project # URL: # For license information, see LICENSE.TXT -from __future__ import print_function, unicode_literals + import operator from functools import reduce @@ -417,7 +417,7 @@ def _pretty(self): refs_line = ' '.join(self._order_ref_strings(self.refs)) - cond_lines = [cond for cond_line in [filter(lambda s: s.strip(), cond._pretty()) + cond_lines = [cond for cond_line in [[s for s in cond._pretty() if s.strip()] for cond in self.conds] for cond in cond_line] length = max([len(refs_line)] + list(map(len, cond_lines))) @@ -805,7 +805,7 @@ function, args = self.uncurry() function_lines = function._pretty() args_lines = [arg._pretty() for arg in args] - max_lines = max(map(len, [function_lines] + args_lines)) + max_lines = max(list(map(len, [function_lines] + args_lines))) function_lines = _pad_vertically(function_lines, max_lines) args_lines = [_pad_vertically(arg_lines, max_lines) for arg_lines in args_lines] func_args_lines = list(zip(function_lines, 
list(zip(*args_lines)))) + for i in '$(find ../python3 -type f -name '\''*.py'\'' |grep -v '\''Tables\.py'\'')' + 2to3 -w -n ../python3/nltk/sem/cooper_storage.py RefactoringTool: Skipping optional fixer: buffer RefactoringTool: Skipping optional fixer: idioms RefactoringTool: Skipping optional fixer: set_literal RefactoringTool: Skipping optional fixer: ws_comma RefactoringTool: Refactored ../python3/nltk/sem/cooper_storage.py RefactoringTool: Files that were modified: RefactoringTool: ../python3/nltk/sem/cooper_storage.py --- ../python3/nltk/sem/cooper_storage.py (original) +++ ../python3/nltk/sem/cooper_storage.py (refactored) @@ -4,7 +4,7 @@ # Author: Ewan Klein # URL: # For license information, see LICENSE.TXT -from __future__ import print_function + from nltk.sem.logic import LambdaExpression, ApplicationExpression, Variable from nltk.parse import load_parser + for i in '$(find ../python3 -type f -name '\''*.py'\'' |grep -v '\''Tables\.py'\'')' + 2to3 -w -n ../python3/nltk/sem/chat80.py RefactoringTool: Skipping optional fixer: buffer RefactoringTool: Skipping optional fixer: idioms RefactoringTool: Skipping optional fixer: set_literal RefactoringTool: Skipping optional fixer: ws_comma RefactoringTool: Refactored ../python3/nltk/sem/chat80.py RefactoringTool: Files that were modified: RefactoringTool: ../python3/nltk/sem/chat80.py --- ../python3/nltk/sem/chat80.py (original) +++ ../python3/nltk/sem/chat80.py (refactored) @@ -122,7 +122,7 @@ current directory. """ -from __future__ import print_function, unicode_literals + import re import shelve @@ -206,7 +206,7 @@ 'sea': sea } -rels = item_metadata.values() +rels = list(item_metadata.values()) not_unary = ['borders.pl', 'contain.pl'] @@ -563,7 +563,7 @@ The suffix '.db' will be automatically appended. 
:type db: string """ - concepts = process_bundle(rels).values() + concepts = list(process_bundle(rels).values()) valuation = make_valuation(concepts, read=True) db_out = shelve.open(db, 'n') @@ -680,7 +680,7 @@ rels = [item_metadata[r] for r in items] concept_map = process_bundle(rels) - return concept_map.values() + return list(concept_map.values()) @@ -738,7 +738,7 @@ else: # build some concepts concept_map = process_bundle(rels) - concepts = concept_map.values() + concepts = list(concept_map.values()) # just print out the vocabulary if options.vocab: items = sorted([(c.arity, c.prefLabel) for c in concepts]) + for i in '$(find ../python3 -type f -name '\''*.py'\'' |grep -v '\''Tables\.py'\'')' + 2to3 -w -n ../python3/nltk/sem/boxer.py RefactoringTool: Skipping optional fixer: buffer RefactoringTool: Skipping optional fixer: idioms RefactoringTool: Skipping optional fixer: set_literal RefactoringTool: Skipping optional fixer: ws_comma RefactoringTool: Refactored ../python3/nltk/sem/boxer.py RefactoringTool: Files that were modified: RefactoringTool: ../python3/nltk/sem/boxer.py --- ../python3/nltk/sem/boxer.py (original) +++ ../python3/nltk/sem/boxer.py (refactored) @@ -24,7 +24,7 @@ models/ boxer/ """ -from __future__ import print_function, unicode_literals + import os import re @@ -129,7 +129,7 @@ assert reduce(operator.and_, (id is not None for id in discourse_ids)) use_disc_id = True else: - discourse_ids = list(map(str, range(len(inputs)))) + discourse_ids = list(map(str, list(range(len(inputs))))) use_disc_id = False candc_out = self._call_candc(inputs, discourse_ids, question, verbose=verbose) @@ -663,7 +663,7 @@ self.assertNextToken(DrtTokens.COMMA) sent_id = int(self.token()) self.assertNextToken(DrtTokens.COMMA) - word_ids = map(int, self.handle_refs()) + word_ids = list(map(int, self.handle_refs())) self.assertNextToken(DrtTokens.COMMA) variable = int(self.token()) self.assertNextToken(DrtTokens.COMMA) @@ -722,7 +722,7 @@ self.assertNextToken(DrtTokens.COMMA) sent_id = self.nullableIntToken() self.assertNextToken(DrtTokens.COMMA) - word_ids = map(int, self.handle_refs()) + word_ids = list(map(int, self.handle_refs())) self.assertNextToken(DrtTokens.COMMA) drs1 = self.process_next_expression(None) self.assertNextToken(DrtTokens.COMMA) @@ -748,7 +748,7 @@ self.assertNextToken(DrtTokens.COMMA) sent_id = self.nullableIntToken() self.assertNextToken(DrtTokens.COMMA) - word_ids = map(int, self.handle_refs()) + word_ids = list(map(int, self.handle_refs())) self.assertNextToken(DrtTokens.COMMA) var = int(self.token()) self.assertNextToken(DrtTokens.COMMA) + for i in '$(find ../python3 -type f -name '\''*.py'\'' |grep -v '\''Tables\.py'\'')' + 2to3 -w -n ../python3/nltk/sem/__init__.py RefactoringTool: Skipping optional fixer: buffer RefactoringTool: Skipping optional fixer: idioms RefactoringTool: Skipping optional fixer: set_literal RefactoringTool: Skipping optional fixer: ws_comma RefactoringTool: No changes to ../python3/nltk/sem/__init__.py RefactoringTool: Files that need to be modified: RefactoringTool: ../python3/nltk/sem/__init__.py + for i in '$(find ../python3 -type f -name '\''*.py'\'' |grep -v '\''Tables\.py'\'')' + 2to3 -w -n ../python3/nltk/probability.py RefactoringTool: Skipping optional fixer: buffer RefactoringTool: Skipping optional fixer: idioms RefactoringTool: Skipping optional fixer: set_literal RefactoringTool: Skipping optional fixer: ws_comma RefactoringTool: Refactored ../python3/nltk/probability.py --- ../python3/nltk/probability.py (original) +++ 
../python3/nltk/probability.py (refactored) @@ -37,7 +37,7 @@ ``ConditionalProbDist``, a derived distribution. """ -from __future__ import print_function, unicode_literals + import math import random @@ -152,7 +152,7 @@ """ _r_Nr = defaultdict(int) - for count in self.values(): + for count in list(self.values()): _r_Nr[count] += 1 # Special case for Nr[0]: @@ -251,7 +251,7 @@ pylab.title(kwargs["title"]) del kwargs["title"] pylab.plot(freqs, **kwargs) - pylab.xticks(range(len(samples)), [compat.text_type(s) for s in samples], rotation=90) + pylab.xticks(list(range(len(samples))), [compat.text_type(s) for s in samples], rotation=90) pylab.xlabel("Samples") pylab.ylabel(ylabel) pylab.show() @@ -549,7 +549,7 @@ for x in prob_dict: self._prob_dict[x] = logp else: - for (x, p) in self._prob_dict.items(): + for (x, p) in list(self._prob_dict.items()): self._prob_dict[x] -= value_sum else: value_sum = sum(self._prob_dict.values()) @@ -559,7 +559,7 @@ self._prob_dict[x] = p else: norm_factor = 1.0/value_sum - for (x, p) in self._prob_dict.items(): + for (x, p) in list(self._prob_dict.items()): self._prob_dict[x] *= norm_factor def prob(self, sample): @@ -578,10 +578,10 @@ def max(self): if not hasattr(self, '_max'): - self._max = max((p,v) for (v,p) in self._prob_dict.items())[1] + self._max = max((p,v) for (v,p) in list(self._prob_dict.items()))[1] return self._max def samples(self): - return self._prob_dict.keys() + return list(self._prob_dict.keys()) def __repr__(self): return '' % len(self._prob_dict) @@ -622,7 +622,7 @@ return self._freqdist.max() def samples(self): - return self._freqdist.keys() + return list(self._freqdist.keys()) def __repr__(self): """ @@ -713,7 +713,7 @@ return self._freqdist.max() def samples(self): - return self._freqdist.keys() + return list(self._freqdist.keys()) def discount(self): gb = self._gamma * self._bins @@ -937,7 +937,7 @@ return self._heldout_fdist def samples(self): - return self._base_fdist.keys() + return list(self._base_fdist.keys()) def prob(self, sample): # Use our precomputed probability estimate. 
@@ -1098,7 +1098,7 @@ return self._freqdist.max() def samples(self): - return self._freqdist.keys() + return list(self._freqdist.keys()) def freqdist(self): return self._freqdist @@ -1228,7 +1228,7 @@ if not nonzero: return [], [] - return zip(*sorted(nonzero.items())) + return list(zip(*sorted(nonzero.items()))) def find_best_fit(self, r, nr): """ @@ -1373,7 +1373,7 @@ return self._freqdist.max() def samples(self): - return self._freqdist.keys() + return list(self._freqdist.keys()) def freqdist(self): return self._freqdist @@ -1594,7 +1594,7 @@ self._D = discount def samples(self): - return self._trigrams.keys() + return list(self._trigrams.keys()) def max(self): return self._trigrams.max() @@ -1757,7 +1757,7 @@ pylab.legend(loc=legend_loc) pylab.grid(True, color="silver") - pylab.xticks(range(len(samples)), [compat.text_type(s) for s in samples], rotation=90) + pylab.xticks(list(range(len(samples))), [compat.text_type(s) for s in samples], rotation=90) if title: pylab.title(title) pylab.xlabel("Samples") RefactoringTool: Files that were modified: RefactoringTool: ../python3/nltk/probability.py @@ -2200,7 +2200,7 @@ sgt = SimpleGoodTuringProbDist(fd) print('%18s %8s %14s' \ % ("word", "freqency", "SimpleGoodTuring")) - fd_keys_sorted=(key for key, value in sorted(fd.items(), key=lambda item: item[1], reverse=True)) + fd_keys_sorted=(key for key, value in sorted(list(fd.items()), key=lambda item: item[1], reverse=True)) for key in fd_keys_sorted: print('%18s %8d %14e' \ % (key, fd[key], sgt.prob(key))) + for i in '$(find ../python3 -type f -name '\''*.py'\'' |grep -v '\''Tables\.py'\'')' + 2to3 -w -n ../python3/nltk/parse/viterbi.py RefactoringTool: Skipping optional fixer: buffer RefactoringTool: Skipping optional fixer: idioms RefactoringTool: Skipping optional fixer: set_literal RefactoringTool: Skipping optional fixer: ws_comma RefactoringTool: Refactored ../python3/nltk/parse/viterbi.py RefactoringTool: Files that were modified: RefactoringTool: ../python3/nltk/parse/viterbi.py --- ../python3/nltk/parse/viterbi.py (original) +++ ../python3/nltk/parse/viterbi.py (refactored) @@ -5,7 +5,7 @@ # Steven Bird # URL: # For license information, see LICENSE.TXT -from __future__ import print_function, unicode_literals + from functools import reduce from nltk.tree import Tree, ProbabilisticTree @@ -375,7 +375,7 @@ print('Time (secs) # Parses Average P(parse)') print('-----------------------------------------') print('%11.4f%11d%19.14f' % (time, num_parses, average)) - parses = all_parses.keys() + parses = list(all_parses.keys()) if parses: p = reduce(lambda a,b:a+b.prob(), parses, 0)/len(parses) else: p = 0 + for i in '$(find ../python3 -type f -name '\''*.py'\'' |grep -v '\''Tables\.py'\'')' + 2to3 -w -n ../python3/nltk/parse/util.py RefactoringTool: Skipping optional fixer: buffer RefactoringTool: Skipping optional fixer: idioms RefactoringTool: Skipping optional fixer: set_literal RefactoringTool: Skipping optional fixer: ws_comma RefactoringTool: Refactored ../python3/nltk/parse/util.py RefactoringTool: Files that were modified: RefactoringTool: ../python3/nltk/parse/util.py --- ../python3/nltk/parse/util.py (original) +++ ../python3/nltk/parse/util.py (refactored) @@ -10,7 +10,7 @@ """ Utility functions for parsers.
""" -from __future__ import print_function + from nltk.grammar import CFG, FeatureGrammar, PCFG from nltk.data import load + for i in '$(find ../python3 -type f -name '\''*.py'\'' |grep -v '\''Tables\.py'\'')' + 2to3 -w -n ../python3/nltk/parse/stanford.py RefactoringTool: Skipping optional fixer: buffer RefactoringTool: Skipping optional fixer: idioms RefactoringTool: Skipping optional fixer: set_literal RefactoringTool: Skipping optional fixer: ws_comma RefactoringTool: Refactored ../python3/nltk/parse/stanford.py RefactoringTool: Files that were modified: RefactoringTool: ../python3/nltk/parse/stanford.py --- ../python3/nltk/parse/stanford.py (original) +++ ../python3/nltk/parse/stanford.py (refactored) @@ -7,7 +7,7 @@ # URL: # For license information, see LICENSE.TXT -from __future__ import unicode_literals + import tempfile import os + for i in '$(find ../python3 -type f -name '\''*.py'\'' |grep -v '\''Tables\.py'\'')' + 2to3 -w -n ../python3/nltk/parse/shiftreduce.py RefactoringTool: Skipping optional fixer: buffer RefactoringTool: Skipping optional fixer: idioms RefactoringTool: Skipping optional fixer: set_literal RefactoringTool: Skipping optional fixer: ws_comma RefactoringTool: Refactored ../python3/nltk/parse/shiftreduce.py RefactoringTool: Files that were modified: RefactoringTool: ../python3/nltk/parse/shiftreduce.py --- ../python3/nltk/parse/shiftreduce.py (original) +++ ../python3/nltk/parse/shiftreduce.py (refactored) @@ -5,7 +5,7 @@ # Steven Bird # URL: # For license information, see LICENSE.TXT -from __future__ import print_function, unicode_literals + from nltk.grammar import Nonterminal from nltk.tree import Tree + for i in '$(find ../python3 -type f -name '\''*.py'\'' |grep -v '\''Tables\.py'\'')' + 2to3 -w -n ../python3/nltk/parse/recursivedescent.py RefactoringTool: Skipping optional fixer: buffer RefactoringTool: Skipping optional fixer: idioms RefactoringTool: Skipping optional fixer: set_literal RefactoringTool: Skipping optional fixer: ws_comma RefactoringTool: Refactored ../python3/nltk/parse/recursivedescent.py RefactoringTool: Files that were modified: RefactoringTool: ../python3/nltk/parse/recursivedescent.py --- ../python3/nltk/parse/recursivedescent.py (original) +++ ../python3/nltk/parse/recursivedescent.py (refactored) @@ -5,7 +5,7 @@ # Steven Bird # URL: # For license information, see LICENSE.TXT -from __future__ import print_function, unicode_literals + from nltk.grammar import Nonterminal from nltk.tree import Tree, ImmutableTree + for i in '$(find ../python3 -type f -name '\''*.py'\'' |grep -v '\''Tables\.py'\'')' + 2to3 -w -n ../python3/nltk/parse/projectivedependencyparser.py RefactoringTool: Skipping optional fixer: buffer RefactoringTool: Skipping optional fixer: idioms RefactoringTool: Skipping optional fixer: set_literal RefactoringTool: Skipping optional fixer: ws_comma RefactoringTool: Refactored ../python3/nltk/parse/projectivedependencyparser.py RefactoringTool: Files that were modified: RefactoringTool: ../python3/nltk/parse/projectivedependencyparser.py --- ../python3/nltk/parse/projectivedependencyparser.py (original) +++ ../python3/nltk/parse/projectivedependencyparser.py (refactored) @@ -6,7 +6,7 @@ # URL: # For license information, see LICENSE.TXT # -from __future__ import print_function, unicode_literals + from collections import defaultdict + for i in '$(find ../python3 -type f -name '\''*.py'\'' |grep -v '\''Tables\.py'\'')' + 2to3 -w -n ../python3/nltk/parse/pchart.py RefactoringTool: Skipping optional fixer: buffer 
RefactoringTool: Skipping optional fixer: idioms RefactoringTool: Skipping optional fixer: set_literal RefactoringTool: Skipping optional fixer: ws_comma RefactoringTool: Refactored ../python3/nltk/parse/pchart.py RefactoringTool: Files that were modified: RefactoringTool: ../python3/nltk/parse/pchart.py --- ../python3/nltk/parse/pchart.py (original) +++ ../python3/nltk/parse/pchart.py (refactored) @@ -29,7 +29,7 @@ argument beam_size. If non-zero, this controls the size of the beam (aka the edge queue). This option is most useful with InsideChartParser. """ -from __future__ import print_function, unicode_literals + ##////////////////////////////////////////////////////// ## Bottom-Up PCFG Chart Parser @@ -456,7 +456,7 @@ print('%18s %4d |%11.4f%11d%19.14f' % (parsers[i].__class__.__name__, parsers[i].beam_size, times[i],num_parses[i],average_p[i])) - parses = all_parses.keys() + parses = list(all_parses.keys()) if parses: p = reduce(lambda a,b:a+b.prob(), parses, 0)/len(parses) else: p = 0 print('------------------------+------------------------------------------') + for i in '$(find ../python3 -type f -name '\''*.py'\'' |grep -v '\''Tables\.py'\'')' + 2to3 -w -n ../python3/nltk/parse/nonprojectivedependencyparser.py RefactoringTool: Skipping optional fixer: buffer RefactoringTool: Skipping optional fixer: idioms RefactoringTool: Skipping optional fixer: set_literal RefactoringTool: Skipping optional fixer: ws_comma RefactoringTool: Refactored ../python3/nltk/parse/nonprojectivedependencyparser.py RefactoringTool: Files that were modified: RefactoringTool: ../python3/nltk/parse/nonprojectivedependencyparser.py --- ../python3/nltk/parse/nonprojectivedependencyparser.py (original) +++ ../python3/nltk/parse/nonprojectivedependencyparser.py (refactored) @@ -6,7 +6,7 @@ # URL: # For license information, see LICENSE.TXT # -from __future__ import print_function + import math @@ -553,7 +553,7 @@ orig_length = len(possible_heads[i]) # print len(possible_heads[i]) if index_on_stack and orig_length == 0: - for j in xrange(len(stack) -1, -1, -1): + for j in range(len(stack) -1, -1, -1): stack_item = stack[j] if stack_item[0] == i: possible_heads[i].append(stack.pop(j)[1]) + for i in '$(find ../python3 -type f -name '\''*.py'\'' |grep -v '\''Tables\.py'\'')' + 2to3 -w -n ../python3/nltk/parse/malt.py RefactoringTool: Skipping optional fixer: buffer RefactoringTool: Skipping optional fixer: idioms RefactoringTool: Skipping optional fixer: set_literal RefactoringTool: Skipping optional fixer: ws_comma RefactoringTool: Refactored ../python3/nltk/parse/malt.py RefactoringTool: Files that were modified: RefactoringTool: ../python3/nltk/parse/malt.py --- ../python3/nltk/parse/malt.py (original) +++ ../python3/nltk/parse/malt.py (refactored) @@ -5,7 +5,7 @@ # Copyright (C) 2001-2014 NLTK Project # URL: # For license information, see LICENSE.TXT -from __future__ import print_function + import os import tempfile @@ -79,7 +79,7 @@ '/usr/local/share/malt-1*'] # Expand wildcards in _malt_path: - malt_path = reduce(add, map(glob.glob, _malt_path)) + malt_path = reduce(add, list(map(glob.glob, _malt_path))) # Find the malt binary. 
self._malt_bin = find_binary('malt.jar', bin, + for i in '$(find ../python3 -type f -name '\''*.py'\'' |grep -v '\''Tables\.py'\'')' + 2to3 -w -n ../python3/nltk/parse/generate.py RefactoringTool: Skipping optional fixer: buffer RefactoringTool: Skipping optional fixer: idioms RefactoringTool: Skipping optional fixer: set_literal RefactoringTool: Skipping optional fixer: ws_comma RefactoringTool: Refactored ../python3/nltk/parse/generate.py RefactoringTool: Files that were modified: RefactoringTool: ../python3/nltk/parse/generate.py --- ../python3/nltk/parse/generate.py (original) +++ ../python3/nltk/parse/generate.py (refactored) @@ -7,7 +7,7 @@ # URL: # For license information, see LICENSE.TXT # -from __future__ import print_function + import itertools import sys + for i in '$(find ../python3 -type f -name '\''*.py'\'' |grep -v '\''Tables\.py'\'')' + 2to3 -w -n ../python3/nltk/parse/featurechart.py RefactoringTool: Skipping optional fixer: buffer RefactoringTool: Skipping optional fixer: idioms RefactoringTool: Skipping optional fixer: set_literal RefactoringTool: Skipping optional fixer: ws_comma RefactoringTool: Refactored ../python3/nltk/parse/featurechart.py RefactoringTool: Files that were modified: RefactoringTool: ../python3/nltk/parse/featurechart.py --- ../python3/nltk/parse/featurechart.py (original) +++ ../python3/nltk/parse/featurechart.py (refactored) @@ -11,7 +11,7 @@ Extension of chart parsing implementation to handle grammars with feature structures as nodes. """ -from __future__ import print_function, unicode_literals + from nltk.compat import xrange, python_2_unicode_compatible from nltk.featstruct import FeatStruct, unify, TYPE, find_variables @@ -187,7 +187,7 @@ A helper function for ``insert``, which registers the new edge with all existing indexes. """ - for (restr_keys, index) in self._indexes.items(): + for (restr_keys, index) in list(self._indexes.items()): vals = tuple(self._get_type_if_possible(getattr(edge, key)()) for key in restr_keys) index.setdefault(vals, []).append(edge) @@ -399,7 +399,7 @@ class FeatureEmptyPredictRule(EmptyPredictRule): def apply(self, chart, grammar): for prod in grammar.productions(empty=True): - for index in xrange(chart.num_leaves() + 1): + for index in range(chart.num_leaves() + 1): new_edge = FeatureTreeEdge.from_production(prod, index) if chart.insert(new_edge, ()): yield new_edge + for i in '$(find ../python3 -type f -name '\''*.py'\'' |grep -v '\''Tables\.py'\'')' + 2to3 -w -n ../python3/nltk/parse/earleychart.py RefactoringTool: Skipping optional fixer: buffer RefactoringTool: Skipping optional fixer: idioms RefactoringTool: Skipping optional fixer: set_literal RefactoringTool: Skipping optional fixer: ws_comma RefactoringTool: Refactored ../python3/nltk/parse/earleychart.py RefactoringTool: Files that were modified: RefactoringTool: ../python3/nltk/parse/earleychart.py --- ../python3/nltk/parse/earleychart.py (original) +++ ../python3/nltk/parse/earleychart.py (refactored) @@ -25,7 +25,7 @@ The main parser class is ``EarleyChartParser``, which is a top-down algorithm, originally formulated by Jay Earley (1970). 
""" -from __future__ import print_function, division + from nltk.compat import xrange from nltk.parse.chart import (Chart, ChartParser, EdgeI, LeafEdge, LeafInitRule, @@ -100,7 +100,7 @@ def _register_with_indexes(self, edge): end = edge.end() - for (restr_keys, index) in self._indexes.items(): + for (restr_keys, index) in list(self._indexes.items()): vals = tuple(getattr(edge, key)() for key in restr_keys) index[end].setdefault(vals, []).append(edge) @@ -108,7 +108,7 @@ self._edgelists[edge.end()].append(edge) def _positions(self): - return xrange(self.num_leaves() + 1) + return range(self.num_leaves() + 1) class FeatureIncrementalChart(IncrementalChart, FeatureChart): @@ -149,7 +149,7 @@ def _register_with_indexes(self, edge): end = edge.end() - for (restr_keys, index) in self._indexes.items(): + for (restr_keys, index) in list(self._indexes.items()): vals = tuple(self._get_type_if_possible(getattr(edge, key)()) for key in restr_keys) index[end].setdefault(vals, []).append(edge) + for i in '$(find ../python3 -type f -name '\''*.py'\'' |grep -v '\''Tables\.py'\'')' + 2to3 -w -n ../python3/nltk/parse/dependencygraph.py RefactoringTool: Skipping optional fixer: buffer RefactoringTool: Skipping optional fixer: idioms RefactoringTool: Skipping optional fixer: set_literal RefactoringTool: Skipping optional fixer: ws_comma RefactoringTool: Refactored ../python3/nltk/parse/dependencygraph.py RefactoringTool: Files that were modified: RefactoringTool: ../python3/nltk/parse/dependencygraph.py --- ../python3/nltk/parse/dependencygraph.py (original) +++ ../python3/nltk/parse/dependencygraph.py (refactored) @@ -13,7 +13,7 @@ The input is assumed to be in Malt-TAB format (http://stp.lingfil.uu.se/~nivre/research/MaltXML.html). """ -from __future__ import print_function, unicode_literals + import re from pprint import pformat + for i in '$(find ../python3 -type f -name '\''*.py'\'' |grep -v '\''Tables\.py'\'')' + 2to3 -w -n ../python3/nltk/parse/chart.py RefactoringTool: Skipping optional fixer: buffer RefactoringTool: Skipping optional fixer: idioms RefactoringTool: Skipping optional fixer: set_literal RefactoringTool: Skipping optional fixer: ws_comma RefactoringTool: Refactored ../python3/nltk/parse/chart.py RefactoringTool: Files that were modified: RefactoringTool: ../python3/nltk/parse/chart.py --- ../python3/nltk/parse/chart.py (original) +++ ../python3/nltk/parse/chart.py (refactored) @@ -35,7 +35,7 @@ - ``SteppingChartParser`` is a subclass of ``ChartParser`` that can be used to step through the parsing process. """ -from __future__ import print_function, division, unicode_literals + import itertools import re @@ -564,7 +564,7 @@ A helper function for ``insert``, which registers the new edge with all existing indexes. """ - for (restr_keys, index) in self._indexes.items(): + for (restr_keys, index) in list(self._indexes.items()): vals = tuple(getattr(edge, key)() for key in restr_keys) index.setdefault(vals, []).append(edge) @@ -711,7 +711,7 @@ :rtype: list(list(EdgeI)) """ # Make a copy, in case they modify it. 
- return self._edge_to_cpls.get(edge, {}).keys() + return list(self._edge_to_cpls.get(edge, {}).keys()) #//////////////////////////////////////////////////////////// # Display @@ -1673,7 +1673,7 @@ print() maxlen = max(len(key) for key in times) format = '%' + repr(maxlen) + 's parser: %6.3fsec' - times_items = times.items() + times_items = list(times.items()) for (parser, t) in sorted(times_items, key=lambda a:a[1]): print(format % (parser, t)) + for i in '$(find ../python3 -type f -name '\''*.py'\'' |grep -v '\''Tables\.py'\'')' + 2to3 -w -n ../python3/nltk/parse/api.py RefactoringTool: Skipping optional fixer: buffer RefactoringTool: Skipping optional fixer: idioms RefactoringTool: Skipping optional fixer: set_literal RefactoringTool: Skipping optional fixer: ws_comma RefactoringTool: No changes to ../python3/nltk/parse/api.py RefactoringTool: Files that need to be modified: RefactoringTool: ../python3/nltk/parse/api.py + for i in '$(find ../python3 -type f -name '\''*.py'\'' |grep -v '\''Tables\.py'\'')' + 2to3 -w -n ../python3/nltk/parse/__init__.py RefactoringTool: Skipping optional fixer: buffer RefactoringTool: Skipping optional fixer: idioms RefactoringTool: Skipping optional fixer: set_literal RefactoringTool: Skipping optional fixer: ws_comma RefactoringTool: No changes to ../python3/nltk/parse/__init__.py RefactoringTool: Files that need to be modified: RefactoringTool: ../python3/nltk/parse/__init__.py + for i in '$(find ../python3 -type f -name '\''*.py'\'' |grep -v '\''Tables\.py'\'')' + 2to3 -w -n ../python3/nltk/misc/wordfinder.py RefactoringTool: Skipping optional fixer: buffer RefactoringTool: Skipping optional fixer: idioms RefactoringTool: Skipping optional fixer: set_literal RefactoringTool: Skipping optional fixer: ws_comma RefactoringTool: Refactored ../python3/nltk/misc/wordfinder.py RefactoringTool: Files that were modified: RefactoringTool: ../python3/nltk/misc/wordfinder.py --- ../python3/nltk/misc/wordfinder.py (original) +++ ../python3/nltk/misc/wordfinder.py (refactored) @@ -7,7 +7,7 @@ # Simplified from PHP version by Robert Klein # http://fswordfinder.sourceforge.net/ -from __future__ import print_function + import random + for i in '$(find ../python3 -type f -name '\''*.py'\'' |grep -v '\''Tables\.py'\'')' + 2to3 -w -n ../python3/nltk/misc/sort.py RefactoringTool: Skipping optional fixer: buffer RefactoringTool: Skipping optional fixer: idioms RefactoringTool: Skipping optional fixer: set_literal RefactoringTool: Skipping optional fixer: ws_comma RefactoringTool: Refactored ../python3/nltk/misc/sort.py RefactoringTool: Files that were modified: RefactoringTool: ../python3/nltk/misc/sort.py --- ../python3/nltk/misc/sort.py (original) +++ ../python3/nltk/misc/sort.py (refactored) @@ -10,7 +10,7 @@ illustrate the many different algorithms (recipes) for solving a problem, and how to analyze algorithms experimentally. 
""" -from __future__ import print_function, division + # These algorithms are taken from: # Levitin (2004) The Design and Analysis of Algorithms + for i in '$(find ../python3 -type f -name '\''*.py'\'' |grep -v '\''Tables\.py'\'')' + 2to3 -w -n ../python3/nltk/misc/minimalset.py RefactoringTool: Skipping optional fixer: buffer RefactoringTool: Skipping optional fixer: idioms RefactoringTool: Skipping optional fixer: set_literal RefactoringTool: Skipping optional fixer: ws_comma RefactoringTool: No changes to ../python3/nltk/misc/minimalset.py RefactoringTool: Files that need to be modified: RefactoringTool: ../python3/nltk/misc/minimalset.py + for i in '$(find ../python3 -type f -name '\''*.py'\'' |grep -v '\''Tables\.py'\'')' + 2to3 -w -n ../python3/nltk/misc/chomsky.py RefactoringTool: Skipping optional fixer: buffer RefactoringTool: Skipping optional fixer: idioms RefactoringTool: Skipping optional fixer: set_literal RefactoringTool: Skipping optional fixer: ws_comma RefactoringTool: Refactored ../python3/nltk/misc/chomsky.py RefactoringTool: Files that were modified: RefactoringTool: ../python3/nltk/misc/chomsky.py --- ../python3/nltk/misc/chomsky.py (original) +++ ../python3/nltk/misc/chomsky.py (refactored) @@ -12,7 +12,7 @@ (CHOMSKY n) -- for example (CHOMSKY 5) generates half a screen of linguistic truth. """ -from __future__ import print_function + leadins = """To characterize a linguistic level L, On the other hand, @@ -126,7 +126,7 @@ phraselist = list(map(str.strip, part.splitlines())) random.shuffle(phraselist) parts.append(phraselist) - output = chain(*islice(izip(*parts), 0, times)) + output = chain(*islice(zip(*parts), 0, times)) print(textwrap.fill(" ".join(output), line_length)) if __name__ == '__main__': + for i in '$(find ../python3 -type f -name '\''*.py'\'' |grep -v '\''Tables\.py'\'')' + 2to3 -w -n ../python3/nltk/misc/babelfish.py RefactoringTool: Skipping optional fixer: buffer RefactoringTool: Skipping optional fixer: idioms RefactoringTool: Skipping optional fixer: set_literal RefactoringTool: Skipping optional fixer: ws_comma RefactoringTool: Refactored ../python3/nltk/misc/babelfish.py RefactoringTool: Files that were modified: RefactoringTool: ../python3/nltk/misc/babelfish.py --- ../python3/nltk/misc/babelfish.py (original) +++ ../python3/nltk/misc/babelfish.py (refactored) @@ -4,7 +4,7 @@ module is kept in NLTK source code in order to provide better error messages for people following the NLTK Book 2.0. """ -from __future__ import print_function + def babelize_shell(): print("Babelfish online translation service is no longer available.") + for i in '$(find ../python3 -type f -name '\''*.py'\'' |grep -v '\''Tables\.py'\'')' + 2to3 -w -n ../python3/nltk/misc/__init__.py RefactoringTool: Skipping optional fixer: buffer RefactoringTool: Skipping optional fixer: idioms RefactoringTool: Skipping optional fixer: set_literal RefactoringTool: Skipping optional fixer: ws_comma RefactoringTool: No files need to be modified. 
+ for i in '$(find ../python3 -type f -name '\''*.py'\'' |grep -v '\''Tables\.py'\'')' + 2to3 -w -n ../python3/nltk/metrics/spearman.py RefactoringTool: Skipping optional fixer: buffer RefactoringTool: Skipping optional fixer: idioms RefactoringTool: Skipping optional fixer: set_literal RefactoringTool: Skipping optional fixer: ws_comma RefactoringTool: No changes to ../python3/nltk/metrics/spearman.py RefactoringTool: Files that need to be modified: RefactoringTool: ../python3/nltk/metrics/spearman.py + for i in '$(find ../python3 -type f -name '\''*.py'\'' |grep -v '\''Tables\.py'\'')' + 2to3 -w -n ../python3/nltk/metrics/segmentation.py RefactoringTool: Skipping optional fixer: buffer RefactoringTool: Skipping optional fixer: idioms RefactoringTool: Skipping optional fixer: set_literal RefactoringTool: Skipping optional fixer: ws_comma RefactoringTool: Refactored ../python3/nltk/metrics/segmentation.py RefactoringTool: Files that were modified: RefactoringTool: ../python3/nltk/metrics/segmentation.py --- ../python3/nltk/metrics/segmentation.py (original) +++ ../python3/nltk/metrics/segmentation.py (refactored) @@ -213,7 +213,7 @@ k = int(round(len(ref) / (ref.count(boundary) * 2.))) err = 0 - for i in xrange(len(ref)-k +1): + for i in range(len(ref)-k +1): r = ref[i:i+k].count(boundary) > 0 h = hyp[i:i+k].count(boundary) > 0 if r != h: + for i in '$(find ../python3 -type f -name '\''*.py'\'' |grep -v '\''Tables\.py'\'')' + 2to3 -w -n ../python3/nltk/metrics/scores.py RefactoringTool: Skipping optional fixer: buffer RefactoringTool: Skipping optional fixer: idioms RefactoringTool: Skipping optional fixer: set_literal RefactoringTool: Skipping optional fixer: ws_comma RefactoringTool: Refactored ../python3/nltk/metrics/scores.py RefactoringTool: Files that were modified: RefactoringTool: ../python3/nltk/metrics/scores.py --- ../python3/nltk/metrics/scores.py (original) +++ ../python3/nltk/metrics/scores.py (refactored) @@ -5,7 +5,7 @@ # Steven Bird # URL: # For license information, see LICENSE.TXT -from __future__ import print_function + from math import fabs import operator @@ -37,7 +37,7 @@ """ if len(reference) != len(test): raise ValueError("Lists must have the same length.") - return float(sum(x == y for x, y in izip(reference, test))) / len(test) + return float(sum(x == y for x, y in zip(reference, test))) / len(test) def precision(reference, test): """ @@ -132,7 +132,7 @@ # Return the average value of dist.logprob(val). 
total_likelihood = sum(dist.logprob(val) - for (val, dist) in izip(reference, test)) + for (val, dist) in zip(reference, test)) return total_likelihood/len(reference) def approxrand(a, b, **kwargs): @@ -159,7 +159,7 @@ shuffles = kwargs.get('shuffles', 999) # there's no point in trying to shuffle beyond all possible permutations shuffles = \ - min(shuffles, reduce(operator.mul, xrange(1, len(a) + len(b) + 1))) + min(shuffles, reduce(operator.mul, range(1, len(a) + len(b) + 1))) stat = kwargs.get('statistic', lambda lst: float(sum(lst)) / len(lst)) verbose = kwargs.get('verbose', False) @@ -176,7 +176,7 @@ lst = LazyConcatenation([a, b]) indices = list(range(len(a) + len(b))) - for i in xrange(shuffles): + for i in range(shuffles): if verbose and i % 10 == 0: print('shuffle: %d' % i) + for i in '$(find ../python3 -type f -name '\''*.py'\'' |grep -v '\''Tables\.py'\'')' + 2to3 -w -n ../python3/nltk/metrics/paice.py RefactoringTool: Skipping optional fixer: buffer RefactoringTool: Skipping optional fixer: idioms RefactoringTool: Skipping optional fixer: set_literal RefactoringTool: Skipping optional fixer: ws_comma RefactoringTool: Refactored ../python3/nltk/metrics/paice.py RefactoringTool: Files that were modified: RefactoringTool: ../python3/nltk/metrics/paice.py --- ../python3/nltk/metrics/paice.py (original) +++ ../python3/nltk/metrics/paice.py (refactored) @@ -351,11 +351,11 @@ } print('Words grouped by their lemmas:') for lemma in sorted(lemmas): - print('%s => %s' % (lemma, ' '.join(lemmas[lemma]))) + print(('%s => %s' % (lemma, ' '.join(lemmas[lemma])))) print() print('Same words grouped by a stemming algorithm:') for stem in sorted(stems): - print('%s => %s' % (stem, ' '.join(stems[stem]))) + print(('%s => %s' % (stem, ' '.join(stems[stem])))) print() p = Paice(lemmas, stems) print(p) @@ -370,7 +370,7 @@ } print('Counting stats after changing stemming results:') for stem in sorted(stems): - print('%s => %s' % (stem, ' '.join(stems[stem]))) + print(('%s => %s' % (stem, ' '.join(stems[stem])))) print() p.stems = stems p.update() + for i in '$(find ../python3 -type f -name '\''*.py'\'' |grep -v '\''Tables\.py'\'')' + 2to3 -w -n ../python3/nltk/metrics/distance.py RefactoringTool: Skipping optional fixer: buffer RefactoringTool: Skipping optional fixer: idioms RefactoringTool: Skipping optional fixer: set_literal RefactoringTool: Skipping optional fixer: ws_comma RefactoringTool: Refactored ../python3/nltk/metrics/distance.py RefactoringTool: Files that were modified: RefactoringTool: ../python3/nltk/metrics/distance.py --- ../python3/nltk/metrics/distance.py (original) +++ ../python3/nltk/metrics/distance.py (refactored) @@ -19,7 +19,7 @@ 3. 
d(a, c) <= d(a, b) + d(b, c) """ -from __future__ import print_function + def _edit_dist_init(len1, len2): + for i in '$(find ../python3 -type f -name '\''*.py'\'' |grep -v '\''Tables\.py'\'')' + 2to3 -w -n ../python3/nltk/metrics/confusionmatrix.py RefactoringTool: Skipping optional fixer: buffer RefactoringTool: Skipping optional fixer: idioms RefactoringTool: Skipping optional fixer: set_literal RefactoringTool: Skipping optional fixer: ws_comma RefactoringTool: Refactored ../python3/nltk/metrics/confusionmatrix.py RefactoringTool: Files that were modified: RefactoringTool: ../python3/nltk/metrics/confusionmatrix.py --- ../python3/nltk/metrics/confusionmatrix.py (original) +++ ../python3/nltk/metrics/confusionmatrix.py (refactored) @@ -5,7 +5,7 @@ # Steven Bird # URL: # For license information, see LICENSE.TXT -from __future__ import print_function, unicode_literals + from nltk.probability import FreqDist from nltk.compat import python_2_unicode_compatible + for i in '$(find ../python3 -type f -name '\''*.py'\'' |grep -v '\''Tables\.py'\'')' + 2to3 -w -n ../python3/nltk/metrics/association.py RefactoringTool: Skipping optional fixer: buffer RefactoringTool: Skipping optional fixer: idioms RefactoringTool: Skipping optional fixer: set_literal RefactoringTool: Skipping optional fixer: ws_comma RefactoringTool: No changes to ../python3/nltk/metrics/association.py RefactoringTool: Files that need to be modified: RefactoringTool: ../python3/nltk/metrics/association.py + for i in '$(find ../python3 -type f -name '\''*.py'\'' |grep -v '\''Tables\.py'\'')' + 2to3 -w -n ../python3/nltk/metrics/agreement.py RefactoringTool: Skipping optional fixer: buffer RefactoringTool: Skipping optional fixer: idioms RefactoringTool: Skipping optional fixer: set_literal RefactoringTool: Skipping optional fixer: ws_comma RefactoringTool: Refactored ../python3/nltk/metrics/agreement.py RefactoringTool: Files that were modified: RefactoringTool: ../python3/nltk/metrics/agreement.py --- ../python3/nltk/metrics/agreement.py (original) +++ ../python3/nltk/metrics/agreement.py (refactored) @@ -68,7 +68,7 @@ 1.0 """ -from __future__ import print_function, unicode_literals + import logging from itertools import groupby @@ -109,9 +109,9 @@ self.load_array(data) def __str__(self): - return "\r\n".join(map(lambda x:"%s\t%s\t%s" % + return "\r\n".join(["%s\t%s\t%s" % (x['coder'], x['item'].replace('_', "\t"), - ",".join(x['labels'])), self.data)) + ",".join(x['labels'])) for x in self.data]) def load_array(self, array): """Load the results of annotation. 
+ for i in '$(find ../python3 -type f -name '\''*.py'\'' |grep -v '\''Tables\.py'\'')' + 2to3 -w -n ../python3/nltk/metrics/__init__.py RefactoringTool: Skipping optional fixer: buffer RefactoringTool: Skipping optional fixer: idioms RefactoringTool: Skipping optional fixer: set_literal RefactoringTool: Skipping optional fixer: ws_comma RefactoringTool: No changes to ../python3/nltk/metrics/__init__.py RefactoringTool: Files that need to be modified: RefactoringTool: ../python3/nltk/metrics/__init__.py + for i in '$(find ../python3 -type f -name '\''*.py'\'' |grep -v '\''Tables\.py'\'')' + 2to3 -w -n ../python3/nltk/lazyimport.py RefactoringTool: Skipping optional fixer: buffer RefactoringTool: Skipping optional fixer: idioms RefactoringTool: Skipping optional fixer: set_literal RefactoringTool: Skipping optional fixer: ws_comma RefactoringTool: Refactored ../python3/nltk/lazyimport.py RefactoringTool: Files that were modified: RefactoringTool: ../python3/nltk/lazyimport.py --- ../python3/nltk/lazyimport.py (original) +++ ../python3/nltk/lazyimport.py (refactored) @@ -14,7 +14,7 @@ See the documentation for further information on copyrights, or contact the author. All Rights Reserved. """ -from __future__ import print_function + ### Constants + for i in '$(find ../python3 -type f -name '\''*.py'\'' |grep -v '\''Tables\.py'\'')' + 2to3 -w -n ../python3/nltk/jsontags.py RefactoringTool: Skipping optional fixer: buffer RefactoringTool: Skipping optional fixer: idioms RefactoringTool: Skipping optional fixer: set_literal RefactoringTool: Skipping optional fixer: ws_comma RefactoringTool: Refactored ../python3/nltk/jsontags.py RefactoringTool: Files that were modified: RefactoringTool: ../python3/nltk/jsontags.py --- ../python3/nltk/jsontags.py (original) +++ ../python3/nltk/jsontags.py (refactored) @@ -45,13 +45,13 @@ def decode_obj(cls, obj): #Decode nested objects first. if isinstance(obj, dict): - obj=dict((key, cls.decode_obj(val)) for (key, val) in obj.items()) + obj=dict((key, cls.decode_obj(val)) for (key, val) in list(obj.items())) elif isinstance(obj, list): obj=list(cls.decode_obj(val) for val in obj) #Check if we have a tagged object. 
if not isinstance(obj, dict) or len(obj) != 1: return obj - obj_tag = next(iter(obj.keys())) + obj_tag = next(iter(list(obj.keys()))) if not obj_tag.startswith('!'): return obj if not obj_tag in json_tags: + for i in '$(find ../python3 -type f -name '\''*.py'\'' |grep -v '\''Tables\.py'\'')' + 2to3 -w -n ../python3/nltk/internals.py RefactoringTool: Skipping optional fixer: buffer RefactoringTool: Skipping optional fixer: idioms RefactoringTool: Skipping optional fixer: set_literal RefactoringTool: Skipping optional fixer: ws_comma RefactoringTool: Refactored ../python3/nltk/internals.py RefactoringTool: Files that were modified: RefactoringTool: ../python3/nltk/internals.py --- ../python3/nltk/internals.py (original) +++ ../python3/nltk/internals.py (refactored) @@ -6,7 +6,7 @@ # Nitin Madnani # URL: # For license information, see LICENSE.TXT -from __future__ import print_function + import subprocess import os + for i in '$(find ../python3 -type f -name '\''*.py'\'' |grep -v '\''Tables\.py'\'')' + 2to3 -w -n ../python3/nltk/inference/tableau.py RefactoringTool: Skipping optional fixer: buffer RefactoringTool: Skipping optional fixer: idioms RefactoringTool: Skipping optional fixer: set_literal RefactoringTool: Skipping optional fixer: ws_comma RefactoringTool: Refactored ../python3/nltk/inference/tableau.py RefactoringTool: Files that were modified: RefactoringTool: ../python3/nltk/inference/tableau.py --- ../python3/nltk/inference/tableau.py (original) +++ ../python3/nltk/inference/tableau.py (refactored) @@ -9,7 +9,7 @@ """ Module for a tableau-based First Order theorem prover. """ -from __future__ import print_function, unicode_literals + from nltk.internals import Counter + for i in '$(find ../python3 -type f -name '\''*.py'\'' |grep -v '\''Tables\.py'\'')' + 2to3 -w -n ../python3/nltk/inference/resolution.py RefactoringTool: Skipping optional fixer: buffer RefactoringTool: Skipping optional fixer: idioms RefactoringTool: Skipping optional fixer: set_literal RefactoringTool: Skipping optional fixer: ws_comma RefactoringTool: Refactored ../python3/nltk/inference/resolution.py RefactoringTool: Files that were modified: RefactoringTool: ../python3/nltk/inference/resolution.py --- ../python3/nltk/inference/resolution.py (original) +++ ../python3/nltk/inference/resolution.py (refactored) @@ -9,7 +9,7 @@ """ Module for a resolution-based First Order theorem prover. """ -from __future__ import print_function, unicode_literals + import operator from collections import defaultdict + for i in '$(find ../python3 -type f -name '\''*.py'\'' |grep -v '\''Tables\.py'\'')' + 2to3 -w -n ../python3/nltk/inference/prover9.py RefactoringTool: Skipping optional fixer: buffer RefactoringTool: Skipping optional fixer: idioms RefactoringTool: Skipping optional fixer: set_literal RefactoringTool: Skipping optional fixer: ws_comma RefactoringTool: Refactored ../python3/nltk/inference/prover9.py RefactoringTool: Files that were modified: RefactoringTool: ../python3/nltk/inference/prover9.py --- ../python3/nltk/inference/prover9.py (original) +++ ../python3/nltk/inference/prover9.py (refactored) @@ -9,7 +9,7 @@ """ A theorem prover that makes use of the external 'Prover9' package. 
""" -from __future__ import print_function + import os import subprocess + for i in '$(find ../python3 -type f -name '\''*.py'\'' |grep -v '\''Tables\.py'\'')' + 2to3 -w -n ../python3/nltk/inference/nonmonotonic.py RefactoringTool: Skipping optional fixer: buffer RefactoringTool: Skipping optional fixer: idioms RefactoringTool: Skipping optional fixer: set_literal RefactoringTool: Skipping optional fixer: ws_comma RefactoringTool: Refactored ../python3/nltk/inference/nonmonotonic.py RefactoringTool: Files that were modified: RefactoringTool: ../python3/nltk/inference/nonmonotonic.py --- ../python3/nltk/inference/nonmonotonic.py (original) +++ ../python3/nltk/inference/nonmonotonic.py (refactored) @@ -11,7 +11,7 @@ this module are based on "Logical Foundations of Artificial Intelligence" by Michael R. Genesereth and Nils J. Nilsson. """ -from __future__ import print_function, unicode_literals + from nltk.inference.prover9 import Prover9, Prover9Command from collections import defaultdict + for i in '$(find ../python3 -type f -name '\''*.py'\'' |grep -v '\''Tables\.py'\'')' + 2to3 -w -n ../python3/nltk/inference/mace.py RefactoringTool: Skipping optional fixer: buffer RefactoringTool: Skipping optional fixer: idioms RefactoringTool: Skipping optional fixer: set_literal RefactoringTool: Skipping optional fixer: ws_comma RefactoringTool: Refactored ../python3/nltk/inference/mace.py RefactoringTool: Files that were modified: RefactoringTool: ../python3/nltk/inference/mace.py --- ../python3/nltk/inference/mace.py (original) +++ ../python3/nltk/inference/mace.py (refactored) @@ -9,7 +9,7 @@ """ A model builder that makes use of the external 'Mace4' package. """ -from __future__ import print_function + import os import tempfile + for i in '$(find ../python3 -type f -name '\''*.py'\'' |grep -v '\''Tables\.py'\'')' + 2to3 -w -n ../python3/nltk/inference/discourse.py RefactoringTool: Skipping optional fixer: buffer RefactoringTool: Skipping optional fixer: idioms RefactoringTool: Skipping optional fixer: set_literal RefactoringTool: Skipping optional fixer: ws_comma RefactoringTool: Refactored ../python3/nltk/inference/discourse.py RefactoringTool: Files that were modified: RefactoringTool: ../python3/nltk/inference/discourse.py --- ../python3/nltk/inference/discourse.py (original) +++ ../python3/nltk/inference/discourse.py (refactored) @@ -42,7 +42,7 @@ (This is not intended to scale beyond very short discourses!) The method ``readings(filter=True)`` will only show those threads which are consistent (taking into account any background assumptions). 
""" -from __future__ import print_function + import os from operator import and_, add @@ -295,7 +295,7 @@ self._filtered_threads = {} # keep the same ids, but only include threads which get models consistency_checked = self._check_consistency(self._threads) - for (tid, thread) in self._threads.items(): + for (tid, thread) in list(self._threads.items()): if (tid, True) in consistency_checked: self._filtered_threads[tid] = thread + for i in '$(find ../python3 -type f -name '\''*.py'\'' |grep -v '\''Tables\.py'\'')' + 2to3 -w -n ../python3/nltk/inference/api.py RefactoringTool: Skipping optional fixer: buffer RefactoringTool: Skipping optional fixer: idioms RefactoringTool: Skipping optional fixer: set_literal RefactoringTool: Skipping optional fixer: ws_comma RefactoringTool: Refactored ../python3/nltk/inference/api.py RefactoringTool: Files that were modified: RefactoringTool: ../python3/nltk/inference/api.py --- ../python3/nltk/inference/api.py (original) +++ ../python3/nltk/inference/api.py (refactored) @@ -17,7 +17,7 @@ goal *G*, the model builder tries to find a counter-model, in the sense of a model that will satisfy the assumptions plus the negation of *G*. """ -from __future__ import print_function + import threading import time @@ -216,7 +216,7 @@ :type retracted: list(sem.Expression) """ retracted = set(retracted) - result_list = list(filter(lambda a: a not in retracted, self._assumptions)) + result_list = list([a for a in self._assumptions if a not in retracted]) if debug and result_list == self._assumptions: print(Warning("Assumptions list has not been changed:")) self.print_assumptions() + for i in '$(find ../python3 -type f -name '\''*.py'\'' |grep -v '\''Tables\.py'\'')' + 2to3 -w -n ../python3/nltk/inference/__init__.py RefactoringTool: Skipping optional fixer: buffer RefactoringTool: Skipping optional fixer: idioms RefactoringTool: Skipping optional fixer: set_literal RefactoringTool: Skipping optional fixer: ws_comma RefactoringTool: No changes to ../python3/nltk/inference/__init__.py RefactoringTool: Files that need to be modified: RefactoringTool: ../python3/nltk/inference/__init__.py + for i in '$(find ../python3 -type f -name '\''*.py'\'' |grep -v '\''Tables\.py'\'')' + 2to3 -w -n ../python3/nltk/help.py RefactoringTool: Skipping optional fixer: buffer RefactoringTool: Skipping optional fixer: idioms RefactoringTool: Skipping optional fixer: set_literal RefactoringTool: Skipping optional fixer: ws_comma RefactoringTool: Refactored ../python3/nltk/help.py RefactoringTool: Files that were modified: RefactoringTool: ../python3/nltk/help.py --- ../python3/nltk/help.py (original) +++ ../python3/nltk/help.py (refactored) @@ -8,7 +8,7 @@ """ Provide structured access to documentation. """ -from __future__ import print_function + import re from textwrap import wrap + for i in '$(find ../python3 -type f -name '\''*.py'\'' |grep -v '\''Tables\.py'\'')' + 2to3 -w -n ../python3/nltk/grammar.py RefactoringTool: Skipping optional fixer: buffer RefactoringTool: Skipping optional fixer: idioms RefactoringTool: Skipping optional fixer: set_literal RefactoringTool: Skipping optional fixer: ws_comma RefactoringTool: Refactored ../python3/nltk/grammar.py RefactoringTool: Files that were modified: RefactoringTool: ../python3/nltk/grammar.py --- ../python3/nltk/grammar.py (original) +++ ../python3/nltk/grammar.py (refactored) @@ -68,7 +68,7 @@ with the right hand side (*rhs*) in a tree (*tree*) is known as "expanding" *lhs* to *rhs* in *tree*. 
""" -from __future__ import print_function, unicode_literals + import re @@ -493,8 +493,8 @@ self._leftcorners = lc self._leftcorner_parents = invert_graph(lc) - nr_leftcorner_categories = sum(map(len, self._immediate_leftcorner_categories.values())) - nr_leftcorner_words = sum(map(len, self._immediate_leftcorner_words.values())) + nr_leftcorner_categories = sum(map(len, list(self._immediate_leftcorner_categories.values()))) + nr_leftcorner_words = sum(map(len, list(self._immediate_leftcorner_words.values()))) if nr_leftcorner_words > nr_leftcorner_categories > 10000: # If the grammar is big, the leftcorner-word dictionary will be too large. # In that case it is better to calculate the relation on demand. @@ -550,7 +550,7 @@ if not empty: return self._productions else: - return self._empty_index.values() + return list(self._empty_index.values()) # only lhs specified so look up its index elif lhs and not rhs: @@ -1078,7 +1078,7 @@ for production in productions: probs[production.lhs()] = (probs.get(production.lhs(), 0) + production.prob()) - for (lhs, p) in probs.items(): + for (lhs, p) in list(probs.items()): if not ((1-PCFG.EPSILON) < p < (1+PCFG.EPSILON)): raise ValueError("Productions for %r do not sum to 1" % lhs) + for i in '$(find ../python3 -type f -name '\''*.py'\'' |grep -v '\''Tables\.py'\'')' + 2to3 -w -n ../python3/nltk/featstruct.py RefactoringTool: Skipping optional fixer: buffer RefactoringTool: Skipping optional fixer: idioms RefactoringTool: Skipping optional fixer: set_literal RefactoringTool: Skipping optional fixer: ws_comma RefactoringTool: Refactored ../python3/nltk/featstruct.py --- ../python3/nltk/featstruct.py (original) +++ ../python3/nltk/featstruct.py (refactored) @@ -88,7 +88,7 @@ or if you plan to use them as dictionary keys, it is strongly recommended that you use full-fledged ``FeatStruct`` objects. """ -from __future__ import print_function, unicode_literals, division + import re import copy @@ -98,6 +98,7 @@ LogicParser, LogicalExpressionException) from nltk.compat import (string_types, integer_types, total_ordering, python_2_unicode_compatible, unicode_repr) +import collections ###################################################################### # Feature Structure @@ -690,8 +691,8 @@ if self._frozen: raise ValueError(_FROZEN_ERROR) if features is None: items = () - elif hasattr(features, 'items') and callable(features.items): - items = features.items() + elif hasattr(features, 'items') and isinstance(features.items, collections.Callable): + items = list(features.items()) elif hasattr(features, '__iter__'): items = features else: @@ -701,7 +702,7 @@ if not isinstance(key, (string_types, Feature)): raise TypeError('Feature names must be strings') self[key] = val - for key, val in morefeatures.items(): + for key, val in list(morefeatures.items()): if not isinstance(key, (string_types, Feature)): raise TypeError('Feature names must be strings') self[key] = val @@ -720,9 +721,9 @@ #{ Uniform Accessor Methods ##//////////////////////////////////////////////////////////// - def _keys(self): return self.keys() - def _values(self): return self.values() - def _items(self): return self.items() + def _keys(self): return list(self.keys()) + def _values(self): return list(self.values()) + def _items(self): return list(self.items()) ##//////////////////////////////////////////////////////////// #{ String Representations @@ -806,7 +807,7 @@ return ['[]'] # What's the longest feature name? Use this to align names. 
- maxfnamelen = max(len("%s" % k) for k in self.keys()) + maxfnamelen = max(len("%s" % k) for k in list(self.keys())) lines = [] # sorting note: keys are unique strings, so we'll never fall @@ -1045,7 +1046,7 @@ if id(fstruct) in visited: return visited.add(id(fstruct)) - if _is_mapping(fstruct): items = fstruct.items() + if _is_mapping(fstruct): items = list(fstruct.items()) elif _is_sequence(fstruct): items = enumerate(fstruct) else: raise ValueError('Expected mapping or sequence') for (fname, fval) in items: @@ -1071,7 +1072,7 @@ if fs_class == 'default': fs_class = _default_fs_class(fstruct) (fstruct, new_bindings) = copy.deepcopy((fstruct, bindings)) bindings.update(new_bindings) - inv_bindings = dict((id(val),var) for (var,val) in bindings.items()) + inv_bindings = dict((id(val),var) for (var,val) in list(bindings.items())) _retract_bindings(fstruct, inv_bindings, fs_class, set()) return fstruct @@ -1080,7 +1081,7 @@ if id(fstruct) in visited: return visited.add(id(fstruct)) - if _is_mapping(fstruct): items = fstruct.items() + if _is_mapping(fstruct): items = list(fstruct.items()) elif _is_sequence(fstruct): items = enumerate(fstruct) else: raise ValueError('Expected mapping or sequence') for (fname, fval) in items: @@ -1102,7 +1103,7 @@ # Visit each node only once: if id(fstruct) in visited: return visited.add(id(fstruct)) - if _is_mapping(fstruct): items = fstruct.items() + if _is_mapping(fstruct): items = list(fstruct.items()) elif _is_sequence(fstruct): items = enumerate(fstruct) else: raise ValueError('Expected mapping or sequence') for (fname, fval) in items: @@ -1174,7 +1175,7 @@ def _rename_variables(fstruct, vars, used_vars, new_vars, fs_class, visited): if id(fstruct) in visited: return visited.add(id(fstruct)) RefactoringTool: Files that were modified: RefactoringTool: ../python3/nltk/featstruct.py - if _is_mapping(fstruct): items = fstruct.items() + if _is_mapping(fstruct): items = list(fstruct.items()) elif _is_sequence(fstruct): items = enumerate(fstruct) else: raise ValueError('Expected mapping or sequence') for (fname, fval) in items: @@ -1559,7 +1560,7 @@ Replace any feature structure that has a forward pointer with the target of its forward pointer (to preserve reentrancy). """ - for (var, value) in bindings.items(): + for (var, value) in list(bindings.items()): while id(value) in forward: value = forward[id(value)] bindings[var] = value @@ -1576,7 +1577,7 @@ if id(fstruct) in visited: return visited.add(id(fstruct)) - if _is_mapping(fstruct): items = fstruct.items() + if _is_mapping(fstruct): items = list(fstruct.items()) elif _is_sequence(fstruct): items = enumerate(fstruct) else: raise ValueError('Expected mapping or sequence') for fname, fval in items: @@ -1595,7 +1596,7 @@ Replace any bound aliased vars with their binding; and replace any unbound aliased vars with their representative var. """ - for (var, value) in bindings.items(): + for (var, value) in list(bindings.items()): while isinstance(value, Variable) and value in bindings: value = bindings[var] = bindings[value] @@ -1625,7 +1626,7 @@ def _trace_bindings(path, bindings): # Print the bindings (if any). 
if len(bindings) > 0: - binditems = sorted(bindings.items(), key=lambda v:v[0].name) + binditems = sorted(list(bindings.items()), key=lambda v:v[0].name) bindstr = '{%s}' % ', '.join( '%s: %s' % (var, _trace_valrepr(val)) for (var, val) in binditems) + for i in '$(find ../python3 -type f -name '\''*.py'\'' |grep -v '\''Tables\.py'\'')' + 2to3 -w -n ../python3/nltk/draw/util.py RefactoringTool: Skipping optional fixer: buffer RefactoringTool: Skipping optional fixer: idioms RefactoringTool: Skipping optional fixer: set_literal RefactoringTool: Skipping optional fixer: ws_comma RefactoringTool: Refactored ../python3/nltk/draw/util.py RefactoringTool: Files that were modified: RefactoringTool: ../python3/nltk/draw/util.py --- ../python3/nltk/draw/util.py (original) +++ ../python3/nltk/draw/util.py (refactored) @@ -601,7 +601,7 @@ try: cb(self) except: - print('Error in drag callback for %r' % self) + print(('Error in drag callback for %r' % self)) elif self.__parent is not None: self.__parent.__drag() + for i in '$(find ../python3 -type f -name '\''*.py'\'' |grep -v '\''Tables\.py'\'')' + 2to3 -w -n ../python3/nltk/draw/tree.py RefactoringTool: Skipping optional fixer: buffer RefactoringTool: Skipping optional fixer: idioms RefactoringTool: Skipping optional fixer: set_literal RefactoringTool: Skipping optional fixer: ws_comma RefactoringTool: No changes to ../python3/nltk/draw/tree.py RefactoringTool: Files that need to be modified: RefactoringTool: ../python3/nltk/draw/tree.py + for i in '$(find ../python3 -type f -name '\''*.py'\'' |grep -v '\''Tables\.py'\'')' + 2to3 -w -n ../python3/nltk/draw/table.py RefactoringTool: Skipping optional fixer: buffer RefactoringTool: Skipping optional fixer: idioms RefactoringTool: Skipping optional fixer: set_literal RefactoringTool: Skipping optional fixer: ws_comma RefactoringTool: No changes to ../python3/nltk/draw/table.py RefactoringTool: Files that need to be modified: RefactoringTool: ../python3/nltk/draw/table.py + for i in '$(find ../python3 -type f -name '\''*.py'\'' |grep -v '\''Tables\.py'\'')' + 2to3 -w -n ../python3/nltk/draw/dispersion.py RefactoringTool: Skipping optional fixer: buffer RefactoringTool: Skipping optional fixer: idioms RefactoringTool: Skipping optional fixer: set_literal RefactoringTool: Skipping optional fixer: ws_comma RefactoringTool: No changes to ../python3/nltk/draw/dispersion.py RefactoringTool: Files that need to be modified: RefactoringTool: ../python3/nltk/draw/dispersion.py + for i in '$(find ../python3 -type f -name '\''*.py'\'' |grep -v '\''Tables\.py'\'')' + 2to3 -w -n ../python3/nltk/draw/cfg.py RefactoringTool: Skipping optional fixer: buffer RefactoringTool: Skipping optional fixer: idioms RefactoringTool: Skipping optional fixer: set_literal RefactoringTool: Skipping optional fixer: ws_comma RefactoringTool: Refactored ../python3/nltk/draw/cfg.py RefactoringTool: Files that were modified: RefactoringTool: ../python3/nltk/draw/cfg.py --- ../python3/nltk/draw/cfg.py (original) +++ ../python3/nltk/draw/cfg.py (refactored) @@ -258,13 +258,13 @@ if (prod_tuples[i][0] == prod_tuples[i-1][0]): if () in prod_tuples[i][1]: continue if () in prod_tuples[i-1][1]: continue - print(prod_tuples[i-1][1]) - print(prod_tuples[i][1]) + print((prod_tuples[i-1][1])) + print((prod_tuples[i][1])) prod_tuples[i-1][1].extend(prod_tuples[i][1]) del prod_tuples[i] for lhs, rhss in prod_tuples: - print(lhs, rhss) + print((lhs, rhss)) s = '%s ->' % lhs for rhs in rhss: for elt in rhs: @@ -624,7 +624,7 @@ else: break else: # 
Everything matched! - print('MATCH AT', i) + print(('MATCH AT', i)) #////////////////////////////////////////////////// # Grammar + for i in '$(find ../python3 -type f -name '\''*.py'\'' |grep -v '\''Tables\.py'\'')' + 2to3 -w -n ../python3/nltk/draw/__init__.py RefactoringTool: Skipping optional fixer: buffer RefactoringTool: Skipping optional fixer: idioms RefactoringTool: Skipping optional fixer: set_literal RefactoringTool: Skipping optional fixer: ws_comma RefactoringTool: No changes to ../python3/nltk/draw/__init__.py RefactoringTool: Files that need to be modified: RefactoringTool: ../python3/nltk/draw/__init__.py + for i in '$(find ../python3 -type f -name '\''*.py'\'' |grep -v '\''Tables\.py'\'')' + 2to3 -w -n ../python3/nltk/downloader.py RefactoringTool: Skipping optional fixer: buffer RefactoringTool: Skipping optional fixer: idioms RefactoringTool: Skipping optional fixer: set_literal RefactoringTool: Skipping optional fixer: ws_comma RefactoringTool: Refactored ../python3/nltk/downloader.py --- ../python3/nltk/downloader.py (original) +++ ../python3/nltk/downloader.py (refactored) @@ -67,7 +67,7 @@ python -m nltk.downloader [-d DATADIR] [-q] [-f] [-k] PACKAGE_IDS """ #---------------------------------------------------------------------- -from __future__ import print_function, division, unicode_literals + """ @@ -484,21 +484,21 @@ def packages(self): self._update_index() - return self._packages.values() + return list(self._packages.values()) def corpora(self): self._update_index() - return [pkg for (id,pkg) in self._packages.items() + return [pkg for (id,pkg) in list(self._packages.items()) if pkg.subdir == 'corpora'] def models(self): self._update_index() - return [pkg for (id,pkg) in self._packages.items() + return [pkg for (id,pkg) in list(self._packages.items()) if pkg.subdir != 'corpora'] def collections(self): self._update_index() - return self._collections.values() + return list(self._collections.values()) #///////////////////////////////////////////////////////////////// # Downloading @@ -837,7 +837,7 @@ self._collections = dict((c.id, c) for c in collections) # Replace identifiers with actual children in collection.children. - for collection in self._collections.values(): + for collection in list(self._collections.values()): for i, child_id in enumerate(collection.children): if child_id in self._packages: collection.children[i] = self._packages[child_id] @@ -845,7 +845,7 @@ collection.children[i] = self._collections[child_id] # Fill in collection.packages for each collection. 
- for collection in self._collections.values(): + for collection in list(self._collections.values()): packages = {} queue = [collection] for child in queue: @@ -853,7 +853,7 @@ queue.extend(child.children) else: packages[child.id] = child - collection.packages = packages.values() + collection.packages = list(packages.values()) # Flush the status cache self._status_cache.clear() @@ -1398,7 +1398,7 @@ self.top.config(menu=menubar) def _select_columns(self): - for (column, var) in self._column_vars.items(): + for (column, var) in list(self._column_vars.items()): if var.get(): self._table.show_column(column) else: @@ -1423,7 +1423,7 @@ def _info_save(self, e=None): focus = self._table - for entry, callback in self._info.values(): + for entry, callback in list(self._info.values()): if entry['state'] == 'disabled': continue if e is not None and e.widget is entry and e.keysym != 'Return': focus = entry @@ -1469,12 +1469,12 @@ def _show_info(self): print('showing info', self._ds.url) - for entry,cb in self._info.values(): + for entry,cb in list(self._info.values()): entry['state'] = 'normal' entry.delete(0, 'end') self._info['url'][0].insert(0, self._ds.url) self._info['download_dir'][0].insert(0, self._ds.download_dir) - for entry,cb in self._info.values(): + for entry,cb in list(self._info.values()): entry['state'] = 'disabled' def _prev_tab(self, *e): @@ -1528,7 +1528,7 @@ self._table.extend(rows) # Highlight the active tab. - for tab, label in self._tabs.items(): + for tab, label in list(self._tabs.items()): if tab == self._tab: label.configure(foreground=self._FRONT_TAB_COLOR[0], background=self._FRONT_TAB_COLOR[1]) @@ -1687,7 +1687,7 @@ def _destroy(self, *e): if self.top is not None: - for afterid in self._afterid.values(): RefactoringTool: Files that were modified: RefactoringTool: ../python3/nltk/downloader.py + for afterid in list(self._afterid.values()): self.top.after_cancel(afterid) # Abort any download in progress. @@ -1747,7 +1747,7 @@ "Written by Edward Loper") TITLE = 'About: NLTK Downloader' try: - from tkMessageBox import Message + from tkinter.messagebox import Message Message(message=ABOUT, title=TITLE).show() except ImportError: ShowText(self._top, TITLE, ABOUT) + for i in '$(find ../python3 -type f -name '\''*.py'\'' |grep -v '\''Tables\.py'\'')' + 2to3 -w -n ../python3/nltk/decorators.py RefactoringTool: Skipping optional fixer: buffer RefactoringTool: Skipping optional fixer: idioms RefactoringTool: Skipping optional fixer: set_literal RefactoringTool: Skipping optional fixer: ws_comma RefactoringTool: Refactored ../python3/nltk/decorators.py RefactoringTool: Files that were modified: RefactoringTool: ../python3/nltk/decorators.py --- ../python3/nltk/decorators.py (original) +++ ../python3/nltk/decorators.py (refactored) @@ -5,7 +5,7 @@ Included in NLTK for its support of a nice memoization decorator. 
""" -from __future__ import print_function + __docformat__ = 'restructuredtext en' ## The basic trick is to generate the source code for the decorated function @@ -70,8 +70,8 @@ _closure = func.__closure__ _globals = func.__globals__ else: - _closure = func.func_closure - _globals = func.func_globals + _closure = func.__closure__ + _globals = func.__globals__ return dict(name=func.__name__, argnames=argnames, signature=signature, defaults = func.__defaults__, doc=func.__doc__, + for i in '$(find ../python3 -type f -name '\''*.py'\'' |grep -v '\''Tables\.py'\'')' + 2to3 -w -n ../python3/nltk/data.py RefactoringTool: Skipping optional fixer: buffer RefactoringTool: Skipping optional fixer: idioms RefactoringTool: Skipping optional fixer: set_literal RefactoringTool: Skipping optional fixer: ws_comma RefactoringTool: Refactored ../python3/nltk/data.py RefactoringTool: Files that were modified: RefactoringTool: ../python3/nltk/data.py --- ../python3/nltk/data.py (original) +++ ../python3/nltk/data.py (refactored) @@ -30,8 +30,8 @@ adds it to a resource cache; and ``retrieve()`` copies a given resource to a local file. """ -from __future__ import print_function, unicode_literals -from __future__ import division + + import sys import io @@ -49,7 +49,7 @@ from zlib import Z_FINISH as FLUSH try: - import cPickle as pickle + import pickle as pickle except ImportError: import pickle @@ -783,7 +783,7 @@ resource_val = json.load(opened_resource) tag = None if len(resource_val) != 1: - tag = next(resource_val.keys()) + tag = next(list(resource_val.keys())) if tag not in json_tags: raise ValueError('Unknown json tag.') elif format == 'yaml': @@ -1135,14 +1135,14 @@ """ return self.read().splitlines(keepends) - def next(self): + def __next__(self): """Return the next decoded line from the underlying stream.""" line = self.readline() if line: return line else: raise StopIteration def __next__(self): - return self.next() + return next(self) def __iter__(self): """Return self""" + for i in '$(find ../python3 -type f -name '\''*.py'\'' |grep -v '\''Tables\.py'\'')' + 2to3 -w -n ../python3/nltk/corpus/util.py RefactoringTool: Skipping optional fixer: buffer RefactoringTool: Skipping optional fixer: idioms RefactoringTool: Skipping optional fixer: set_literal RefactoringTool: Skipping optional fixer: ws_comma RefactoringTool: Refactored ../python3/nltk/corpus/util.py RefactoringTool: Files that were modified: RefactoringTool: ../python3/nltk/corpus/util.py --- ../python3/nltk/corpus/util.py (original) +++ ../python3/nltk/corpus/util.py (refactored) @@ -9,7 +9,7 @@ #{ Lazy Corpus Loader ###################################################################### -from __future__ import unicode_literals + import re import gc import nltk + for i in '$(find ../python3 -type f -name '\''*.py'\'' |grep -v '\''Tables\.py'\'')' + 2to3 -w -n ../python3/nltk/corpus/reader/ycoe.py RefactoringTool: Skipping optional fixer: buffer RefactoringTool: Skipping optional fixer: idioms RefactoringTool: Skipping optional fixer: set_literal RefactoringTool: Skipping optional fixer: ws_comma RefactoringTool: No changes to ../python3/nltk/corpus/reader/ycoe.py RefactoringTool: Files that need to be modified: RefactoringTool: ../python3/nltk/corpus/reader/ycoe.py + for i in '$(find ../python3 -type f -name '\''*.py'\'' |grep -v '\''Tables\.py'\'')' + 2to3 -w -n ../python3/nltk/corpus/reader/xmldocs.py RefactoringTool: Skipping optional fixer: buffer RefactoringTool: Skipping optional fixer: idioms RefactoringTool: Skipping optional fixer: 
set_literal RefactoringTool: Skipping optional fixer: ws_comma RefactoringTool: Refactored ../python3/nltk/corpus/reader/xmldocs.py RefactoringTool: Files that were modified: RefactoringTool: ../python3/nltk/corpus/reader/xmldocs.py --- ../python3/nltk/corpus/reader/xmldocs.py (original) +++ ../python3/nltk/corpus/reader/xmldocs.py (refactored) @@ -10,7 +10,7 @@ (note -- not named 'xml' to avoid conflicting w/ standard xml package) """ -from __future__ import print_function, unicode_literals + import codecs + for i in '$(find ../python3 -type f -name '\''*.py'\'' |grep -v '\''Tables\.py'\'')' + 2to3 -w -n ../python3/nltk/corpus/reader/wordnet.py RefactoringTool: Skipping optional fixer: buffer RefactoringTool: Skipping optional fixer: idioms RefactoringTool: Skipping optional fixer: set_literal RefactoringTool: Skipping optional fixer: ws_comma RefactoringTool: Refactored ../python3/nltk/corpus/reader/wordnet.py RefactoringTool: Files that were modified: RefactoringTool: ../python3/nltk/corpus/reader/wordnet.py --- ../python3/nltk/corpus/reader/wordnet.py (original) +++ ../python3/nltk/corpus/reader/wordnet.py (refactored) @@ -20,7 +20,7 @@ http://wordnet.princeton.edu/ """ -from __future__ import print_function, unicode_literals + import math import re @@ -418,7 +418,7 @@ self._wordnet_corpus_reader._load_lang_data(lang) i = self._wordnet_corpus_reader.ss2of(self) - for x in self._wordnet_corpus_reader._lang_data[lang][0].keys(): + for x in list(self._wordnet_corpus_reader._lang_data[lang][0].keys()): if x == i: return self._wordnet_corpus_reader._lang_data[lang][0][x] @@ -993,7 +993,7 @@ #{ Part of speech constants _pos_numbers = {NOUN: 1, VERB: 2, ADJ: 3, ADV: 4, ADJ_SAT: 5} - _pos_names = dict(tup[::-1] for tup in _pos_numbers.items()) + _pos_names = dict(tup[::-1] for tup in list(_pos_numbers.items())) #} #: A list of file identifiers for all the fileids used by this @@ -1064,7 +1064,7 @@ if lang not in self.langs(): raise WordNetError("Language is not supported.") - if lang in self._lang_data.keys(): + if lang in list(self._lang_data.keys()): return f = self._omw_reader.open('{0:}/wn-data-{0:}.tab'.format(lang)) @@ -1095,7 +1095,7 @@ def _load_lemma_pos_offset_map(self): - for suffix in self._FILEMAP.values(): + for suffix in list(self._FILEMAP.values()): # parse each line of the file (ignoring comment lines) for i, line in enumerate(self.open('index.%s' % suffix)): @@ -1116,7 +1116,7 @@ # get the pointer symbols for all synsets of this lemma n_pointers = int(_next_token()) - _ = [_next_token() for _ in xrange(n_pointers)] + _ = [_next_token() for _ in range(n_pointers)] # same as number of synsets n_senses = int(_next_token()) @@ -1126,7 +1126,7 @@ _ = int(_next_token()) # get synset offsets - synset_offsets = [int(_next_token()) for _ in xrange(n_synsets)] + synset_offsets = [int(_next_token()) for _ in range(n_synsets)] # raise more informative error with file name and line number except (AssertionError, ValueError) as e: @@ -1140,7 +1140,7 @@ def _load_exception_map(self): # load the exception file data into memory - for pos, suffix in self._FILEMAP.items(): + for pos, suffix in list(self._FILEMAP.items()): self._exception_map[pos] = {} for line in self.open('%s.exc' % suffix): terms = line.split() @@ -1305,7 +1305,7 @@ # create Lemma objects for each lemma n_lemmas = int(_next_token(), 16) - for _ in xrange(n_lemmas): + for _ in range(n_lemmas): # get the lemma name lemma_name = _next_token() # get the lex_id (used for sense_keys) @@ -1321,7 +1321,7 @@ # collect the pointer 
tuples n_pointers = int(_next_token()) - for _ in xrange(n_pointers): + for _ in range(n_pointers): symbol = _next_token() offset = int(_next_token()) pos = _next_token() @@ -1342,7 +1342,7 @@ except StopIteration: pass else: - for _ in xrange(frame_count): + for _ in range(frame_count): # read the plus sign plus = _next_token() assert plus == '+' @@ -1475,7 +1475,7 @@ will be loaded. """ if pos is None: - pos_tags = self._FILEMAP.keys() + pos_tags = list(self._FILEMAP.keys()) else: pos_tags = [pos] + for i in '$(find ../python3 -type f -name '\''*.py'\'' |grep -v '\''Tables\.py'\'')' + 2to3 -w -n ../python3/nltk/corpus/reader/wordlist.py RefactoringTool: Skipping optional fixer: buffer RefactoringTool: Skipping optional fixer: idioms RefactoringTool: Skipping optional fixer: set_literal RefactoringTool: Skipping optional fixer: ws_comma RefactoringTool: No changes to ../python3/nltk/corpus/reader/wordlist.py RefactoringTool: Files that need to be modified: RefactoringTool: ../python3/nltk/corpus/reader/wordlist.py + for i in '$(find ../python3 -type f -name '\''*.py'\'' |grep -v '\''Tables\.py'\'')' + 2to3 -w -n ../python3/nltk/corpus/reader/verbnet.py RefactoringTool: Skipping optional fixer: buffer RefactoringTool: Skipping optional fixer: idioms RefactoringTool: Skipping optional fixer: set_literal RefactoringTool: Skipping optional fixer: ws_comma RefactoringTool: Refactored ../python3/nltk/corpus/reader/verbnet.py RefactoringTool: Files that were modified: RefactoringTool: ../python3/nltk/corpus/reader/verbnet.py --- ../python3/nltk/corpus/reader/verbnet.py (original) +++ ../python3/nltk/corpus/reader/verbnet.py (refactored) @@ -11,7 +11,7 @@ For details about VerbNet see: http://verbs.colorado.edu/~mpalmer/projects/verbnet.html """ -from __future__ import unicode_literals + import re import textwrap @@ -100,7 +100,7 @@ raise ValueError('Specify at most one of: fileid, wordnetid, ' 'fileid, classid') if fileid is not None: - return [c for (c,f) in self._class_to_fileid.items() + return [c for (c,f) in list(self._class_to_fileid.items()) if f == fileid] elif lemma is not None: return self._lemma_to_class[lemma] + for i in '$(find ../python3 -type f -name '\''*.py'\'' |grep -v '\''Tables\.py'\'')' + 2to3 -w -n ../python3/nltk/corpus/reader/util.py RefactoringTool: Skipping optional fixer: buffer RefactoringTool: Skipping optional fixer: idioms RefactoringTool: Skipping optional fixer: set_literal RefactoringTool: Skipping optional fixer: ws_comma RefactoringTool: Refactored ../python3/nltk/corpus/reader/util.py RefactoringTool: Files that were modified: RefactoringTool: ../python3/nltk/corpus/reader/util.py --- ../python3/nltk/corpus/reader/util.py (original) +++ ../python3/nltk/corpus/reader/util.py (refactored) @@ -12,7 +12,7 @@ import tempfile from functools import reduce try: - import cPickle as pickle + import pickle as pickle except ImportError: import pickle + for i in '$(find ../python3 -type f -name '\''*.py'\'' |grep -v '\''Tables\.py'\'')' + 2to3 -w -n ../python3/nltk/corpus/reader/udhr.py RefactoringTool: Skipping optional fixer: buffer RefactoringTool: Skipping optional fixer: idioms RefactoringTool: Skipping optional fixer: set_literal RefactoringTool: Skipping optional fixer: ws_comma RefactoringTool: Refactored ../python3/nltk/corpus/reader/udhr.py RefactoringTool: Files that were modified: RefactoringTool: ../python3/nltk/corpus/reader/udhr.py --- ../python3/nltk/corpus/reader/udhr.py (original) +++ ../python3/nltk/corpus/reader/udhr.py (refactored) @@ -2,7 +2,7 @@ """ 
UDHR corpus reader. It mostly deals with encodings. """ -from __future__ import absolute_import, unicode_literals + from nltk.corpus.reader.util import find_corpus_fileids from nltk.corpus.reader.plaintext import PlaintextCorpusReader + for i in '$(find ../python3 -type f -name '\''*.py'\'' |grep -v '\''Tables\.py'\'')' + 2to3 -w -n ../python3/nltk/corpus/reader/toolbox.py RefactoringTool: Skipping optional fixer: buffer RefactoringTool: Skipping optional fixer: idioms RefactoringTool: Skipping optional fixer: set_literal RefactoringTool: Skipping optional fixer: ws_comma RefactoringTool: No changes to ../python3/nltk/corpus/reader/toolbox.py RefactoringTool: Files that need to be modified: RefactoringTool: ../python3/nltk/corpus/reader/toolbox.py + for i in '$(find ../python3 -type f -name '\''*.py'\'' |grep -v '\''Tables\.py'\'')' + 2to3 -w -n ../python3/nltk/corpus/reader/timit.py RefactoringTool: Skipping optional fixer: buffer RefactoringTool: Skipping optional fixer: idioms RefactoringTool: Skipping optional fixer: set_literal RefactoringTool: Skipping optional fixer: ws_comma RefactoringTool: Refactored ../python3/nltk/corpus/reader/timit.py RefactoringTool: Files that were modified: RefactoringTool: ../python3/nltk/corpus/reader/timit.py --- ../python3/nltk/corpus/reader/timit.py (original) +++ ../python3/nltk/corpus/reader/timit.py (refactored) @@ -118,7 +118,7 @@ timit.audiodata function. """ -from __future__ import print_function, unicode_literals + import sys import os @@ -402,9 +402,9 @@ # Method 2: pygame try: # FIXME: this won't work under python 3 - import pygame.mixer, StringIO + import pygame.mixer, io pygame.mixer.init(16000) - f = StringIO.StringIO(self.wav(utterance, start, end)) + f = io.StringIO(self.wav(utterance, start, end)) pygame.mixer.Sound(f).play() while pygame.mixer.get_busy(): time.sleep(0.01) + for i in '$(find ../python3 -type f -name '\''*.py'\'' |grep -v '\''Tables\.py'\'')' + 2to3 -w -n ../python3/nltk/corpus/reader/tagged.py RefactoringTool: Skipping optional fixer: buffer RefactoringTool: Skipping optional fixer: idioms RefactoringTool: Skipping optional fixer: set_literal RefactoringTool: Skipping optional fixer: ws_comma RefactoringTool: No changes to ../python3/nltk/corpus/reader/tagged.py RefactoringTool: Files that need to be modified: RefactoringTool: ../python3/nltk/corpus/reader/tagged.py + for i in '$(find ../python3 -type f -name '\''*.py'\'' |grep -v '\''Tables\.py'\'')' + 2to3 -w -n ../python3/nltk/corpus/reader/switchboard.py RefactoringTool: Skipping optional fixer: buffer RefactoringTool: Skipping optional fixer: idioms RefactoringTool: Skipping optional fixer: set_literal RefactoringTool: Skipping optional fixer: ws_comma RefactoringTool: Refactored ../python3/nltk/corpus/reader/switchboard.py RefactoringTool: Files that were modified: RefactoringTool: ../python3/nltk/corpus/reader/switchboard.py --- ../python3/nltk/corpus/reader/switchboard.py (original) +++ ../python3/nltk/corpus/reader/switchboard.py (refactored) @@ -4,7 +4,7 @@ # Author: Edward Loper # URL: # For license information, see LICENSE.TXT -from __future__ import unicode_literals + import re from nltk.tag import str2tuple, map_tag + for i in '$(find ../python3 -type f -name '\''*.py'\'' |grep -v '\''Tables\.py'\'')' + 2to3 -w -n ../python3/nltk/corpus/reader/string_category.py RefactoringTool: Skipping optional fixer: buffer RefactoringTool: Skipping optional fixer: idioms RefactoringTool: Skipping optional fixer: set_literal RefactoringTool: Skipping optional fixer: 
ws_comma RefactoringTool: No changes to ../python3/nltk/corpus/reader/string_category.py RefactoringTool: Files that need to be modified: RefactoringTool: ../python3/nltk/corpus/reader/string_category.py + for i in '$(find ../python3 -type f -name '\''*.py'\'' |grep -v '\''Tables\.py'\'')' + 2to3 -w -n ../python3/nltk/corpus/reader/sinica_treebank.py RefactoringTool: Skipping optional fixer: buffer RefactoringTool: Skipping optional fixer: idioms RefactoringTool: Skipping optional fixer: set_literal RefactoringTool: Skipping optional fixer: ws_comma RefactoringTool: No changes to ../python3/nltk/corpus/reader/sinica_treebank.py RefactoringTool: Files that need to be modified: RefactoringTool: ../python3/nltk/corpus/reader/sinica_treebank.py + for i in '$(find ../python3 -type f -name '\''*.py'\'' |grep -v '\''Tables\.py'\'')' + 2to3 -w -n ../python3/nltk/corpus/reader/sentiwordnet.py RefactoringTool: Skipping optional fixer: buffer RefactoringTool: Skipping optional fixer: idioms RefactoringTool: Skipping optional fixer: set_literal RefactoringTool: Skipping optional fixer: ws_comma RefactoringTool: Refactored ../python3/nltk/corpus/reader/sentiwordnet.py RefactoringTool: Files that were modified: RefactoringTool: ../python3/nltk/corpus/reader/sentiwordnet.py --- ../python3/nltk/corpus/reader/sentiwordnet.py (original) +++ ../python3/nltk/corpus/reader/sentiwordnet.py (refactored) @@ -54,7 +54,7 @@ def _parse_src_file(self): lines = self.open(self._fileids[0]).read().splitlines() - lines = filter((lambda x : not re.search(r"^\s*#", x)), lines) + lines = list(filter((lambda x : not re.search(r"^\s*#", x)), lines)) for i, line in enumerate(lines): fields = [field.strip() for field in re.split(r"\t+", line)] try: @@ -88,12 +88,12 @@ synset_list = wn.synsets(string, pos) for synset in synset_list: sentis.append(self.senti_synset(synset.name())) - sentis = filter(lambda x : x, sentis) + sentis = [x for x in sentis if x] return sentis def all_senti_synsets(self): from nltk.corpus import wordnet as wn - for key, fields in self._db.items(): + for key, fields in list(self._db.items()): pos, offset = key pos_score, neg_score = fields synset = wn._synset_from_pos_and_offset(pos, offset) + for i in '$(find ../python3 -type f -name '\''*.py'\'' |grep -v '\''Tables\.py'\'')' + 2to3 -w -n ../python3/nltk/corpus/reader/senseval.py RefactoringTool: Skipping optional fixer: buffer RefactoringTool: Skipping optional fixer: idioms RefactoringTool: Skipping optional fixer: set_literal RefactoringTool: Skipping optional fixer: ws_comma RefactoringTool: Refactored ../python3/nltk/corpus/reader/senseval.py RefactoringTool: Files that were modified: RefactoringTool: ../python3/nltk/corpus/reader/senseval.py --- ../python3/nltk/corpus/reader/senseval.py (original) +++ ../python3/nltk/corpus/reader/senseval.py (refactored) @@ -21,7 +21,7 @@ Each instance of the ambiguous words "hard", "interest", "line", and "serve" is tagged with a sense identifier, and supplied with context. 
""" -from __future__ import print_function, unicode_literals + import re from xml.etree import ElementTree + for i in '$(find ../python3 -type f -name '\''*.py'\'' |grep -v '\''Tables\.py'\'')' + 2to3 -w -n ../python3/nltk/corpus/reader/semcor.py RefactoringTool: Skipping optional fixer: buffer RefactoringTool: Skipping optional fixer: idioms RefactoringTool: Skipping optional fixer: set_literal RefactoringTool: Skipping optional fixer: ws_comma RefactoringTool: Refactored ../python3/nltk/corpus/reader/semcor.py RefactoringTool: Files that were modified: RefactoringTool: ../python3/nltk/corpus/reader/semcor.py --- ../python3/nltk/corpus/reader/semcor.py (original) +++ ../python3/nltk/corpus/reader/semcor.py (refactored) @@ -8,7 +8,7 @@ """ Corpus reader for the SemCor Corpus. """ -from __future__ import absolute_import, unicode_literals + __docformat__ = 'epytext en' from nltk.corpus.reader.api import * @@ -153,7 +153,7 @@ # the "rdf" attribute holds its inflected form and "lemma" holds its lemma. # For NEs, "rdf", "lemma", and "pn" all hold the same value (the NE class). sensenum = xmlword.get('wnsn') # WordNet sense number - isOOVEntity = 'pn' in xmlword.keys() # a "personal name" (NE) not in WordNet + isOOVEntity = 'pn' in list(xmlword.keys()) # a "personal name" (NE) not in WordNet pos = xmlword.get('pos') # part of speech for the whole chunk (None for punctuation) if unit=='token': + for i in '$(find ../python3 -type f -name '\''*.py'\'' |grep -v '\''Tables\.py'\'')' + 2to3 -w -n ../python3/nltk/corpus/reader/rte.py RefactoringTool: Skipping optional fixer: buffer RefactoringTool: Skipping optional fixer: idioms RefactoringTool: Skipping optional fixer: set_literal RefactoringTool: Skipping optional fixer: ws_comma RefactoringTool: Refactored ../python3/nltk/corpus/reader/rte.py RefactoringTool: Files that were modified: RefactoringTool: ../python3/nltk/corpus/reader/rte.py --- ../python3/nltk/corpus/reader/rte.py (original) +++ ../python3/nltk/corpus/reader/rte.py (refactored) @@ -32,7 +32,7 @@ file, taking values 1, 2 or 3. The GID is formatted 'm-n', where 'm' is the challenge number and 'n' is the pair ID. 
""" -from __future__ import unicode_literals + from nltk import compat from nltk.corpus.reader.util import * from nltk.corpus.reader.api import * + for i in '$(find ../python3 -type f -name '\''*.py'\'' |grep -v '\''Tables\.py'\'')' + 2to3 -w -n ../python3/nltk/corpus/reader/propbank.py RefactoringTool: Skipping optional fixer: buffer RefactoringTool: Skipping optional fixer: idioms RefactoringTool: Skipping optional fixer: set_literal RefactoringTool: Skipping optional fixer: ws_comma RefactoringTool: Refactored ../python3/nltk/corpus/reader/propbank.py RefactoringTool: Files that were modified: RefactoringTool: ../python3/nltk/corpus/reader/propbank.py --- ../python3/nltk/corpus/reader/propbank.py (original) +++ ../python3/nltk/corpus/reader/propbank.py (refactored) @@ -5,7 +5,7 @@ # URL: # For license information, see LICENSE.TXT -from __future__ import unicode_literals + import re from xml.etree import ElementTree + for i in '$(find ../python3 -type f -name '\''*.py'\'' |grep -v '\''Tables\.py'\'')' + 2to3 -w -n ../python3/nltk/corpus/reader/ppattach.py RefactoringTool: Skipping optional fixer: buffer RefactoringTool: Skipping optional fixer: idioms RefactoringTool: Skipping optional fixer: set_literal RefactoringTool: Skipping optional fixer: ws_comma RefactoringTool: Refactored ../python3/nltk/corpus/reader/ppattach.py RefactoringTool: Files that were modified: RefactoringTool: ../python3/nltk/corpus/reader/ppattach.py --- ../python3/nltk/corpus/reader/ppattach.py (original) +++ ../python3/nltk/corpus/reader/ppattach.py (refactored) @@ -37,7 +37,7 @@ The PP Attachment Corpus is distributed with NLTK with the permission of the author. """ -from __future__ import unicode_literals + from nltk import compat from nltk.corpus.reader.util import * + for i in '$(find ../python3 -type f -name '\''*.py'\'' |grep -v '\''Tables\.py'\'')' + 2to3 -w -n ../python3/nltk/corpus/reader/plaintext.py RefactoringTool: Skipping optional fixer: buffer RefactoringTool: Skipping optional fixer: idioms RefactoringTool: Skipping optional fixer: set_literal RefactoringTool: Skipping optional fixer: ws_comma RefactoringTool: No changes to ../python3/nltk/corpus/reader/plaintext.py RefactoringTool: Files that need to be modified: RefactoringTool: ../python3/nltk/corpus/reader/plaintext.py + for i in '$(find ../python3 -type f -name '\''*.py'\'' |grep -v '\''Tables\.py'\'')' + 2to3 -w -n ../python3/nltk/corpus/reader/pl196x.py RefactoringTool: Skipping optional fixer: buffer RefactoringTool: Skipping optional fixer: idioms RefactoringTool: Skipping optional fixer: set_literal RefactoringTool: Skipping optional fixer: ws_comma RefactoringTool: No changes to ../python3/nltk/corpus/reader/pl196x.py RefactoringTool: Files that need to be modified: RefactoringTool: ../python3/nltk/corpus/reader/pl196x.py + for i in '$(find ../python3 -type f -name '\''*.py'\'' |grep -v '\''Tables\.py'\'')' + 2to3 -w -n ../python3/nltk/corpus/reader/nps_chat.py RefactoringTool: Skipping optional fixer: buffer RefactoringTool: Skipping optional fixer: idioms RefactoringTool: Skipping optional fixer: set_literal RefactoringTool: Skipping optional fixer: ws_comma RefactoringTool: Refactored ../python3/nltk/corpus/reader/nps_chat.py RefactoringTool: Files that were modified: RefactoringTool: ../python3/nltk/corpus/reader/nps_chat.py --- ../python3/nltk/corpus/reader/nps_chat.py (original) +++ ../python3/nltk/corpus/reader/nps_chat.py (refactored) @@ -4,7 +4,7 @@ # Author: Edward Loper # URL: # For license information, see LICENSE.TXT -from 
__future__ import unicode_literals + import re import textwrap + for i in '$(find ../python3 -type f -name '\''*.py'\'' |grep -v '\''Tables\.py'\'')' + 2to3 -w -n ../python3/nltk/corpus/reader/nombank.py RefactoringTool: Skipping optional fixer: buffer RefactoringTool: Skipping optional fixer: idioms RefactoringTool: Skipping optional fixer: set_literal RefactoringTool: Skipping optional fixer: ws_comma RefactoringTool: Refactored ../python3/nltk/corpus/reader/nombank.py RefactoringTool: Files that were modified: RefactoringTool: ../python3/nltk/corpus/reader/nombank.py --- ../python3/nltk/corpus/reader/nombank.py (original) +++ ../python3/nltk/corpus/reader/nombank.py (refactored) @@ -6,7 +6,7 @@ # URL: # For license information, see LICENSE.TXT -from __future__ import unicode_literals + from nltk.tree import Tree from xml.etree import ElementTree + for i in '$(find ../python3 -type f -name '\''*.py'\'' |grep -v '\''Tables\.py'\'')' + 2to3 -w -n ../python3/nltk/corpus/reader/lin.py RefactoringTool: Skipping optional fixer: buffer RefactoringTool: Skipping optional fixer: idioms RefactoringTool: Skipping optional fixer: set_literal RefactoringTool: Skipping optional fixer: ws_comma RefactoringTool: Refactored ../python3/nltk/corpus/reader/lin.py RefactoringTool: Files that were modified: RefactoringTool: ../python3/nltk/corpus/reader/lin.py --- ../python3/nltk/corpus/reader/lin.py (original) +++ ../python3/nltk/corpus/reader/lin.py (refactored) @@ -4,7 +4,7 @@ # Author: Dan Blanchard # URL: # For license information, see LICENSE.txt -from __future__ import print_function + import re from collections import defaultdict @@ -96,9 +96,9 @@ scores and synonyms. ''' if fileid: - return self._thesaurus[fileid][ngram].items() + return list(self._thesaurus[fileid][ngram].items()) else: - return [(fileid, self._thesaurus[fileid][ngram].items()) for fileid in self._fileids] + return [(fileid, list(self._thesaurus[fileid][ngram].items())) for fileid in self._fileids] def synonyms(self, ngram, fileid=None): ''' @@ -112,9 +112,9 @@ lists, where inner lists contain synonyms. 
''' if fileid: - return self._thesaurus[fileid][ngram].keys() + return list(self._thesaurus[fileid][ngram].keys()) else: - return [(fileid, self._thesaurus[fileid][ngram].keys()) for fileid in self._fileids] + return [(fileid, list(self._thesaurus[fileid][ngram].keys())) for fileid in self._fileids] def __contains__(self, ngram): ''' + for i in '$(find ../python3 -type f -name '\''*.py'\'' |grep -v '\''Tables\.py'\'')' + 2to3 -w -n ../python3/nltk/corpus/reader/knbc.py RefactoringTool: Skipping optional fixer: buffer RefactoringTool: Skipping optional fixer: idioms RefactoringTool: Skipping optional fixer: set_literal RefactoringTool: Skipping optional fixer: ws_comma RefactoringTool: Refactored ../python3/nltk/corpus/reader/knbc.py RefactoringTool: Files that were modified: RefactoringTool: ../python3/nltk/corpus/reader/knbc.py --- ../python3/nltk/corpus/reader/knbc.py (original) +++ ../python3/nltk/corpus/reader/knbc.py (refactored) @@ -6,7 +6,7 @@ # For license information, see LICENSE.TXT # For more information, see http://lilyx.net/pages/nltkjapanesecorpus.html -from __future__ import print_function + import sys + for i in '$(find ../python3 -type f -name '\''*.py'\'' |grep -v '\''Tables\.py'\'')' + 2to3 -w -n ../python3/nltk/corpus/reader/ipipan.py RefactoringTool: Skipping optional fixer: buffer RefactoringTool: Skipping optional fixer: idioms RefactoringTool: Skipping optional fixer: set_literal RefactoringTool: Skipping optional fixer: ws_comma RefactoringTool: Refactored ../python3/nltk/corpus/reader/ipipan.py RefactoringTool: Files that were modified: RefactoringTool: ../python3/nltk/corpus/reader/ipipan.py --- ../python3/nltk/corpus/reader/ipipan.py (original) +++ ../python3/nltk/corpus/reader/ipipan.py (refactored) @@ -164,7 +164,7 @@ values_list = self._get_tag(fp, tag) for value in values_list: if map is not None: - value = map(value) + value = list(map(value)) if value in values: ret_fileids.add(f) return list(ret_fileids) @@ -198,7 +198,7 @@ replace_xmlentities = kwargs.pop('replace_xmlentities', True) if len(kwargs) > 0: - raise ValueError('Unexpected arguments: %s' % kwargs.keys()) + raise ValueError('Unexpected arguments: %s' % list(kwargs.keys())) if not one_tag and not disamb_only: raise ValueError('You cannot specify both one_tag=False and ' 'disamb_only=False') + for i in '$(find ../python3 -type f -name '\''*.py'\'' |grep -v '\''Tables\.py'\'')' + 2to3 -w -n ../python3/nltk/corpus/reader/indian.py RefactoringTool: Skipping optional fixer: buffer RefactoringTool: Skipping optional fixer: idioms RefactoringTool: Skipping optional fixer: set_literal RefactoringTool: Skipping optional fixer: ws_comma RefactoringTool: No changes to ../python3/nltk/corpus/reader/indian.py RefactoringTool: Files that need to be modified: RefactoringTool: ../python3/nltk/corpus/reader/indian.py + for i in '$(find ../python3 -type f -name '\''*.py'\'' |grep -v '\''Tables\.py'\'')' + 2to3 -w -n ../python3/nltk/corpus/reader/ieer.py RefactoringTool: Skipping optional fixer: buffer RefactoringTool: Skipping optional fixer: idioms RefactoringTool: Skipping optional fixer: set_literal RefactoringTool: Skipping optional fixer: ws_comma RefactoringTool: Refactored ../python3/nltk/corpus/reader/ieer.py RefactoringTool: Files that were modified: RefactoringTool: ../python3/nltk/corpus/reader/ieer.py --- ../python3/nltk/corpus/reader/ieer.py (original) +++ ../python3/nltk/corpus/reader/ieer.py (refactored) @@ -20,7 +20,7 @@ The corpus contains the following files: APW_19980314, APW_19980424, 
APW_19980429, NYT_19980315, NYT_19980403, and NYT_19980407. """ -from __future__ import unicode_literals + import nltk from nltk import compat + for i in '$(find ../python3 -type f -name '\''*.py'\'' |grep -v '\''Tables\.py'\'')' + 2to3 -w -n ../python3/nltk/corpus/reader/framenet.py RefactoringTool: Skipping optional fixer: buffer RefactoringTool: Skipping optional fixer: idioms RefactoringTool: Skipping optional fixer: set_literal RefactoringTool: Skipping optional fixer: ws_comma RefactoringTool: Refactored ../python3/nltk/corpus/reader/framenet.py --- ../python3/nltk/corpus/reader/framenet.py (original) +++ ../python3/nltk/corpus/reader/framenet.py (refactored) @@ -5,7 +5,7 @@ # Nathan Schneider # URL: # For license information, see LICENSE.TXT -from __future__ import print_function, unicode_literals + """ Corpus reader for the Framenet 1.5 Corpus. @@ -22,6 +22,7 @@ from nltk.corpus.reader import XMLCorpusReader, XMLCorpusView from nltk.compat import text_type, string_types, python_2_unicode_compatible from nltk.util import AbstractLazySequence, LazyMap +import collections def _pretty_longstring(defstr, prefix='', wrap_at=65): @@ -73,7 +74,7 @@ :rtype: str """ - semkeys = st.keys() + semkeys = list(st.keys()) if len(semkeys) == 1: return "" outstr = "" @@ -142,7 +143,7 @@ :rtype: str """ - lukeys = lu.keys() + lukeys = list(lu.keys()) outstr = "" outstr += "lexical unit ({0.ID}): {0.name}\n\n".format(lu) if 'definition' in lukeys: @@ -181,7 +182,7 @@ :return: A nicely formated string representation of the frame element. :rtype: str """ - fekeys = fe.keys() + fekeys = list(fe.keys()) outstr = "" outstr += "frame element ({0.ID}): {0.name}\n of {1.name}({1.ID})\n".format(fe, fe.frame) if 'definition' in fekeys: @@ -249,7 +250,7 @@ except KeyError: fes[fe.coreType] = [] fes[fe.coreType].append("{0} ({1})".format(feName, fe.ID)) - for ct in sorted(fes.keys(), key=lambda ct2: ['Core','Core-Unexpressed','Peripheral','Extra-Thematic'].index(ct2)): + for ct in sorted(list(fes.keys()), key=lambda ct2: ['Core','Core-Unexpressed','Peripheral','Extra-Thematic'].index(ct2)): outstr += "{0:>16}: {1}\n".format(ct, ', '.join(sorted(fes[ct]))) outstr += "\n[FEcoreSets] {0} frame element core sets\n".format(len(frame.FEcoreSets)) @@ -354,12 +355,12 @@ self._loader = loader self._d = None def _data(self): - if callable(self._loader): + if isinstance(self._loader, collections.Callable): self._d = self._loader() self._loader = None # the data is now cached return self._d - def __nonzero__(self): + def __bool__(self): return bool(self._data()) def __len__(self): return len(self._data()) @@ -722,7 +723,7 @@ # INFERENCE RULE: propagate lexical semtypes from the frame to all its LUs for st in fentry.semTypes: if st.rootType.name=='Lexical_type': - for lu in fentry.lexUnit.values(): + for lu in list(fentry.lexUnit.values()): if st not in lu.semTypes: lu.semTypes.append(st) @@ -965,7 +966,7 @@ f = self.frame_by_id(luinfo.frameID) luinfo = self._lu_idx[fn_luid] if ignorekeys: - return AttrDict(dict((k, v) for k, v in luinfo.items() if k not in ignorekeys)) + return AttrDict(dict((k, v) for k, v in list(luinfo.items()) if k not in ignorekeys)) return luinfo @@ -1185,7 +1186,7 @@ fIDs = list(self._frame_idx.keys()) if name is not None: - return PrettyList(self.frame(fID) for fID,finfo in self.frame_ids_and_names(name).items()) + return PrettyList(self.frame(fID) for fID,finfo in list(self.frame_ids_and_names(name).items())) else: return PrettyLazyMap(self.frame, fIDs) @@ -1196,7 +1197,7 @@ """ if not 
self._frame_idx: self._buildframeindex() - return dict((fID, finfo.name) for fID,finfo in self._frame_idx.items() if name is None or re.search(name, finfo.name) is not None) + return dict((fID, finfo.name) for fID,finfo in list(self._frame_idx.items()) if name is None or re.search(name, finfo.name) is not None) def lus(self, name=None): """ @@ -1305,7 +1306,7 @@ luIDs = list(self._lu_idx.keys()) if name is not None: - return PrettyList(self.lu(luID) for luID,luName in self.lu_ids_and_names(name).items()) RefactoringTool: Files that were modified: RefactoringTool: ../python3/nltk/corpus/reader/framenet.py + return PrettyList(self.lu(luID) for luID,luName in list(self.lu_ids_and_names(name).items())) else: return PrettyLazyMap(self.lu, luIDs) @@ -1316,7 +1317,7 @@ """ if not self._lu_idx: self._buildluindex() - return dict((luID, luinfo.name) for luID,luinfo in self._lu_idx.items() if name is None or re.search(name, luinfo.name) is not None) + return dict((luID, luinfo.name) for luID,luinfo in list(self._lu_idx.items()) if name is None or re.search(name, luinfo.name) is not None) def documents(self, name=None): """ @@ -1353,10 +1354,10 @@ - 'filename' """ try: - ftlist = PrettyList(self._fulltext_idx.values()) + ftlist = PrettyList(list(self._fulltext_idx.values())) except AttributeError: self._buildcorpusindex() - ftlist = PrettyList(self._fulltext_idx.values()) + ftlist = PrettyList(list(self._fulltext_idx.values())) if name is None: return ftlist @@ -1386,7 +1387,7 @@ """ if not self._freltyp_idx: self._buildrelationindex() - return self._freltyp_idx.values() + return list(self._freltyp_idx.values()) def frame_relations(self, frame=None, frame2=None, type=None): """ @@ -1453,7 +1454,7 @@ # lookup by 'type' rels = type.frameRelations else: - rels = self._frel_idx.values() + rels = list(self._frel_idx.values()) # filter by 'frame2' if frame2 is not None: @@ -1500,7 +1501,7 @@ """ if not self._ferel_idx: self._buildrelationindex() - return PrettyList(sorted(self._ferel_idx.values(), + return PrettyList(sorted(list(self._ferel_idx.values()), key=lambda ferel: (ferel.type.ID, ferel.frameRelation.superFrameName, ferel.superFEName, ferel.frameRelation.subFrameName, ferel.subFEName))) @@ -1675,7 +1676,7 @@ frinfo['frameRelations'] = self.frame_relations(frame=frinfo) # resolve 'requires' and 'excludes' links between FEs of this frame - for fe in frinfo.FE.values(): + for fe in list(frinfo.FE.values()): if fe.requiresFE: name, ID = fe.requiresFE.name, fe.requiresFE.ID fe.requiresFE = frinfo.FE[name] @@ -2004,7 +2005,7 @@ # print( '\nThe "core" Frame Elements in the "{0}" frame:'.format(m_frame.name)) - print(' ', [x.name for x in m_frame.FE.values() if x.coreType == "Core"]) + print(' ', [x.name for x in list(m_frame.FE.values()) if x.coreType == "Core"]) # # get all of the Lexical Units that are incorporated in the @@ -2012,7 +2013,7 @@ # print('\nAll Lexical Units that are incorporated in the "Ailment" FE:') m_frame = fn.frame(239) - ailment_lus = [x for x in m_frame.lexUnit.values() if 'incorporatedFE' in x and x.incorporatedFE == 'Ailment'] + ailment_lus = [x for x in list(m_frame.lexUnit.values()) if 'incorporatedFE' in x and x.incorporatedFE == 'Ailment'] print(' ', [x.name for x in ailment_lus]) # @@ -2020,7 +2021,7 @@ # print('\nNumber of Lexical Units in the "{0}" frame:'.format(m_frame.name), len(m_frame.lexUnit)) - print(' ', [x.name for x in m_frame.lexUnit.values()][:5], '...') + print(' ', [x.name for x in list(m_frame.lexUnit.values())][:5], '...') # # get basic info on the second
LU in the frame + for i in '$(find ../python3 -type f -name '\''*.py'\'' |grep -v '\''Tables\.py'\'')' + 2to3 -w -n ../python3/nltk/corpus/reader/dependency.py RefactoringTool: Skipping optional fixer: buffer RefactoringTool: Skipping optional fixer: idioms RefactoringTool: Skipping optional fixer: set_literal RefactoringTool: Skipping optional fixer: ws_comma RefactoringTool: No changes to ../python3/nltk/corpus/reader/dependency.py RefactoringTool: Files that need to be modified: RefactoringTool: ../python3/nltk/corpus/reader/dependency.py + for i in '$(find ../python3 -type f -name '\''*.py'\'' |grep -v '\''Tables\.py'\'')' + 2to3 -w -n ../python3/nltk/corpus/reader/conll.py RefactoringTool: Skipping optional fixer: buffer RefactoringTool: Skipping optional fixer: idioms RefactoringTool: Skipping optional fixer: set_literal RefactoringTool: Skipping optional fixer: ws_comma RefactoringTool: Refactored ../python3/nltk/corpus/reader/conll.py RefactoringTool: Files that were modified: RefactoringTool: ../python3/nltk/corpus/reader/conll.py --- ../python3/nltk/corpus/reader/conll.py (original) +++ ../python3/nltk/corpus/reader/conll.py (refactored) @@ -10,7 +10,7 @@ Read CoNLL-style chunk fileids. """ -from __future__ import unicode_literals + import os import codecs + for i in '$(find ../python3 -type f -name '\''*.py'\'' |grep -v '\''Tables\.py'\'')' + 2to3 -w -n ../python3/nltk/corpus/reader/cmudict.py RefactoringTool: Skipping optional fixer: buffer RefactoringTool: Skipping optional fixer: idioms RefactoringTool: Skipping optional fixer: set_literal RefactoringTool: Skipping optional fixer: ws_comma RefactoringTool: No changes to ../python3/nltk/corpus/reader/cmudict.py RefactoringTool: Files that need to be modified: RefactoringTool: ../python3/nltk/corpus/reader/cmudict.py + for i in '$(find ../python3 -type f -name '\''*.py'\'' |grep -v '\''Tables\.py'\'')' + 2to3 -w -n ../python3/nltk/corpus/reader/chunked.py RefactoringTool: Skipping optional fixer: buffer RefactoringTool: Skipping optional fixer: idioms RefactoringTool: Skipping optional fixer: set_literal RefactoringTool: Skipping optional fixer: ws_comma RefactoringTool: No changes to ../python3/nltk/corpus/reader/chunked.py RefactoringTool: Files that need to be modified: RefactoringTool: ../python3/nltk/corpus/reader/chunked.py + for i in '$(find ../python3 -type f -name '\''*.py'\'' |grep -v '\''Tables\.py'\'')' + 2to3 -w -n ../python3/nltk/corpus/reader/childes.py RefactoringTool: Skipping optional fixer: buffer RefactoringTool: Skipping optional fixer: idioms RefactoringTool: Skipping optional fixer: set_literal RefactoringTool: Skipping optional fixer: ws_comma RefactoringTool: Refactored ../python3/nltk/corpus/reader/childes.py RefactoringTool: Files that were modified: RefactoringTool: ../python3/nltk/corpus/reader/childes.py --- ../python3/nltk/corpus/reader/childes.py (original) +++ ../python3/nltk/corpus/reader/childes.py (refactored) @@ -9,7 +9,7 @@ """ Corpus reader for the XML version of the CHILDES corpus. 
""" -from __future__ import print_function + __docformat__ = 'epytext en' @@ -148,7 +148,7 @@ def _get_corpus(self, fileid): results = dict() xmldoc = ElementTree.parse(fileid).getroot() - for key, value in xmldoc.items(): + for key, value in list(xmldoc.items()): results[key] = value return results @@ -171,7 +171,7 @@ pat = dictOfDicts() for participant in xmldoc.findall('.//{%s}Participants/{%s}participant' % (NS,NS)): - for (key,value) in participant.items(): + for (key,value) in list(participant.items()): pat[participant.get('id')][key] = value return pat @@ -443,7 +443,7 @@ for file in childes.fileids()[:5]: corpus = '' corpus_id = '' - for (key,value) in childes.corpus(file)[0].items(): + for (key,value) in list(childes.corpus(file)[0].items()): if key == "Corpus": corpus = value if key == "Id": corpus_id = value print('Reading', corpus,corpus_id,' .....') @@ -455,8 +455,8 @@ print("stemmed words:", childes.words(file, stem=True)[:7]," ...") print("words with relations and pos-tag:", childes.words(file, relation=True)[:5]," ...") print("sentence:", childes.sents(file)[:2]," ...") - for (participant, values) in childes.participants(file)[0].items(): - for (key, value) in values.items(): + for (participant, values) in list(childes.participants(file)[0].items()): + for (key, value) in list(values.items()): print("\tparticipant", participant, key, ":", value) print("num of sent:", len(childes.sents(file))) print("num of morphemes:", len(childes.words(file, stem=True))) + for i in '$(find ../python3 -type f -name '\''*.py'\'' |grep -v '\''Tables\.py'\'')' + 2to3 -w -n ../python3/nltk/corpus/reader/chasen.py RefactoringTool: Skipping optional fixer: buffer RefactoringTool: Skipping optional fixer: idioms RefactoringTool: Skipping optional fixer: set_literal RefactoringTool: Skipping optional fixer: ws_comma RefactoringTool: Refactored ../python3/nltk/corpus/reader/chasen.py RefactoringTool: Files that were modified: RefactoringTool: ../python3/nltk/corpus/reader/chasen.py --- ../python3/nltk/corpus/reader/chasen.py (original) +++ ../python3/nltk/corpus/reader/chasen.py (refactored) @@ -5,7 +5,7 @@ # For license information, see LICENSE.TXT # For more information, see http://lilyx.net/pages/nltkjapanesecorpus.html -from __future__ import print_function + import sys + for i in '$(find ../python3 -type f -name '\''*.py'\'' |grep -v '\''Tables\.py'\'')' + 2to3 -w -n ../python3/nltk/corpus/reader/bracket_parse.py RefactoringTool: Skipping optional fixer: buffer RefactoringTool: Skipping optional fixer: idioms RefactoringTool: Skipping optional fixer: set_literal RefactoringTool: Skipping optional fixer: ws_comma RefactoringTool: No changes to ../python3/nltk/corpus/reader/bracket_parse.py RefactoringTool: Files that need to be modified: RefactoringTool: ../python3/nltk/corpus/reader/bracket_parse.py + for i in '$(find ../python3 -type f -name '\''*.py'\'' |grep -v '\''Tables\.py'\'')' + 2to3 -w -n ../python3/nltk/corpus/reader/bnc.py RefactoringTool: Skipping optional fixer: buffer RefactoringTool: Skipping optional fixer: idioms RefactoringTool: Skipping optional fixer: set_literal RefactoringTool: Skipping optional fixer: ws_comma RefactoringTool: No changes to ../python3/nltk/corpus/reader/bnc.py RefactoringTool: Files that need to be modified: RefactoringTool: ../python3/nltk/corpus/reader/bnc.py + for i in '$(find ../python3 -type f -name '\''*.py'\'' |grep -v '\''Tables\.py'\'')' + 2to3 -w -n ../python3/nltk/corpus/reader/api.py RefactoringTool: Skipping optional fixer: buffer 
RefactoringTool: Skipping optional fixer: idioms RefactoringTool: Skipping optional fixer: set_literal RefactoringTool: Skipping optional fixer: ws_comma RefactoringTool: Refactored ../python3/nltk/corpus/reader/api.py RefactoringTool: Files that were modified: RefactoringTool: ../python3/nltk/corpus/reader/api.py --- ../python3/nltk/corpus/reader/api.py (original) +++ ../python3/nltk/corpus/reader/api.py (refactored) @@ -9,7 +9,7 @@ """ API for corpus readers. """ -from __future__ import unicode_literals + import os import re @@ -299,7 +299,7 @@ self._add(file_id, category) elif self._map is not None: - for (file_id, categories) in self._map.items(): + for (file_id, categories) in list(self._map.items()): for category in categories: self._add(file_id, category) @@ -420,14 +420,14 @@ return sum(self._read_tagged_sent_block(stream, tagset), []) def _read_sent_block(self, stream): - return list(filter(None, [self._word(t) for t in self._read_block(stream)])) + return list([_f for _f in [self._word(t) for t in self._read_block(stream)] if _f]) def _read_tagged_sent_block(self, stream, tagset=None): - return list(filter(None, [self._tag(t, tagset) - for t in self._read_block(stream)])) + return list([_f for _f in [self._tag(t, tagset) + for t in self._read_block(stream)] if _f]) def _read_parsed_sent_block(self, stream): - return list(filter(None, [self._parse(t) for t in self._read_block(stream)])) + return list([_f for _f in [self._parse(t) for t in self._read_block(stream)] if _f]) #} End of Block Readers #------------------------------------------------------------ + for i in '$(find ../python3 -type f -name '\''*.py'\'' |grep -v '\''Tables\.py'\'')' + 2to3 -w -n ../python3/nltk/corpus/reader/aligned.py RefactoringTool: Skipping optional fixer: buffer RefactoringTool: Skipping optional fixer: idioms RefactoringTool: Skipping optional fixer: set_literal RefactoringTool: Skipping optional fixer: ws_comma RefactoringTool: No changes to ../python3/nltk/corpus/reader/aligned.py RefactoringTool: Files that need to be modified: RefactoringTool: ../python3/nltk/corpus/reader/aligned.py + for i in '$(find ../python3 -type f -name '\''*.py'\'' |grep -v '\''Tables\.py'\'')' + 2to3 -w -n ../python3/nltk/corpus/reader/__init__.py RefactoringTool: Skipping optional fixer: buffer RefactoringTool: Skipping optional fixer: idioms RefactoringTool: Skipping optional fixer: set_literal RefactoringTool: Skipping optional fixer: ws_comma RefactoringTool: No changes to ../python3/nltk/corpus/reader/__init__.py RefactoringTool: Files that need to be modified: RefactoringTool: ../python3/nltk/corpus/reader/__init__.py + for i in '$(find ../python3 -type f -name '\''*.py'\'' |grep -v '\''Tables\.py'\'')' + 2to3 -w -n ../python3/nltk/corpus/europarl_raw.py RefactoringTool: Skipping optional fixer: buffer RefactoringTool: Skipping optional fixer: idioms RefactoringTool: Skipping optional fixer: set_literal RefactoringTool: Skipping optional fixer: ws_comma RefactoringTool: No changes to ../python3/nltk/corpus/europarl_raw.py RefactoringTool: Files that need to be modified: RefactoringTool: ../python3/nltk/corpus/europarl_raw.py + for i in '$(find ../python3 -type f -name '\''*.py'\'' |grep -v '\''Tables\.py'\'')' + 2to3 -w -n ../python3/nltk/corpus/__init__.py RefactoringTool: Skipping optional fixer: buffer RefactoringTool: Skipping optional fixer: idioms RefactoringTool: Skipping optional fixer: set_literal RefactoringTool: Skipping optional fixer: ws_comma RefactoringTool: No changes to 
../python3/nltk/corpus/__init__.py RefactoringTool: Files that need to be modified: RefactoringTool: ../python3/nltk/corpus/__init__.py + for i in '$(find ../python3 -type f -name '\''*.py'\'' |grep -v '\''Tables\.py'\'')' + 2to3 -w -n ../python3/nltk/compat.py RefactoringTool: Skipping optional fixer: buffer RefactoringTool: Skipping optional fixer: idioms RefactoringTool: Skipping optional fixer: set_literal RefactoringTool: Skipping optional fixer: ws_comma RefactoringTool: Refactored ../python3/nltk/compat.py RefactoringTool: Files that were modified: RefactoringTool: ../python3/nltk/compat.py RefactoringTool: Warnings/messages while refactoring: RefactoringTool: ### In file ../python3/nltk/compat.py ### RefactoringTool: Line 179: You should use 'operator.mul(None, count)' here. --- ../python3/nltk/compat.py (original) +++ ../python3/nltk/compat.py (refactored) @@ -6,7 +6,7 @@ # URL: # For license information, see LICENSE.TXT -from __future__ import absolute_import, print_function + import sys import types from functools import wraps @@ -58,14 +58,14 @@ def b(s): return s def u(s): - return unicode(s, "unicode_escape") - - string_types = basestring, - integer_types = (int, long) - class_types = (type, types.ClassType) - text_type = unicode + return str(s, "unicode_escape") + + string_types = str, + integer_types = (int, int) + class_types = (type, type) + text_type = str binary_type = str - get_im_class = lambda meth: meth.im_class + get_im_class = lambda meth: meth.__self__.__class__ xrange = xrange _iterkeys = "iterkeys" _itervalues = "itervalues" @@ -73,20 +73,21 @@ reload = reload raw_input = raw_input - from itertools import imap, izip + try: - from cStringIO import StringIO + from io import StringIO except ImportError: - from StringIO import StringIO + from io import StringIO BytesIO = StringIO - import htmlentitydefs + import html.entities from urllib2 import (urlopen, HTTPError, URLError, ProxyHandler, build_opener, install_opener, HTTPPasswordMgrWithDefaultRealm, ProxyBasicAuthHandler, ProxyDigestAuthHandler, Request) - from urllib import getproxies, quote_plus, unquote_plus, urlencode, url2pathname + from urllib.parse import quote_plus, unquote_plus, urlencode + from urllib.request import url2pathname # Maps py2 tkinter package structure to py3 using import hook (PEP 302) class TkinterPackage(object): @@ -124,7 +125,7 @@ if PY26: from operator import itemgetter from heapq import nlargest - from itertools import repeat, ifilter + from itertools import repeat class Counter(dict): '''Dict subclass for counting hashable objects. Sometimes called a bag @@ -161,8 +162,8 @@ ''' if n is None: - return sorted(self.iteritems(), key=itemgetter(1), reverse=True) - return nlargest(n, self.iteritems(), key=itemgetter(1)) + return sorted(iter(self.items()), key=itemgetter(1), reverse=True) + return nlargest(n, iter(self.items()), key=itemgetter(1)) def elements(self): '''Iterator over elements repeating each as many times as its count. @@ -175,7 +176,7 @@ elements() will ignore it. 
''' - for elem, count in self.iteritems(): + for elem, count in self.items(): for _ in repeat(None, count): yield elem @@ -203,7 +204,7 @@ if hasattr(iterable, 'iteritems'): if self: self_get = self.get - for elem, count in iterable.iteritems(): + for elem, count in iterable.items(): self[elem] = self_get(elem, 0) + count else: dict.update(self, iterable) # fast path when counter is empty @@ -301,7 +302,7 @@ result = Counter() if len(self) < len(other): self, other = other, self - for elem in ifilter(self.__contains__, other): + for elem in filter(self.__contains__, other): newcount = _min(self[elem], other[elem]) if newcount > 0: result[elem] = newcount @@ -464,7 +465,7 @@ if hasattr(obj, 'unicode_repr'): return obj.unicode_repr() - if isinstance(obj, unicode): + if isinstance(obj, str): return repr(obj)[1:] # strip "u" letter from output return repr(obj) + for i in '$(find ../python3 -type f -name '\''*.py'\'' |grep -v '\''Tables\.py'\'')' + 2to3 -w -n ../python3/nltk/collocations.py RefactoringTool: Skipping optional fixer: buffer RefactoringTool: Skipping optional fixer: idioms RefactoringTool: Skipping optional fixer: set_literal RefactoringTool: Skipping optional fixer: ws_comma RefactoringTool: Refactored ../python3/nltk/collocations.py RefactoringTool: Files that were modified: RefactoringTool: ../python3/nltk/collocations.py --- ../python3/nltk/collocations.py (original) +++ ../python3/nltk/collocations.py (refactored) @@ -23,7 +23,7 @@ ngram given appropriate frequency counts. A number of standard association measures are provided in bigram_measures and trigram_measures. """ -from __future__ import print_function + # Possible TODOs: # - consider the distinction between f(x,_) and f(x) and whether our + for i in '$(find ../python3 -type f -name '\''*.py'\'' |grep -v '\''Tables\.py'\'')' + 2to3 -w -n ../python3/nltk/cluster/util.py RefactoringTool: Skipping optional fixer: buffer RefactoringTool: Skipping optional fixer: idioms RefactoringTool: Skipping optional fixer: set_literal RefactoringTool: Skipping optional fixer: ws_comma RefactoringTool: Refactored ../python3/nltk/cluster/util.py RefactoringTool: Files that were modified: RefactoringTool: ../python3/nltk/cluster/util.py --- ../python3/nltk/cluster/util.py (original) +++ ../python3/nltk/cluster/util.py (refactored) @@ -4,7 +4,7 @@ # Author: Trevor Cohn # URL: # For license information, see LICENSE.TXT -from __future__ import print_function, unicode_literals + import copy from sys import stdout @@ -235,7 +235,7 @@ last_row = ["%s" % leaf._value for leaf in leaves] # find the bottom row and the best cell width - width = max(map(len, last_row)) + 1 + width = max(list(map(len, last_row))) + 1 lhalf = width / 2 rhalf = width - lhalf - 1 @@ -250,7 +250,7 @@ verticals = [ format(' ') for leaf in leaves ] while queue: priority, node = queue.pop() - child_left_leaf = list(map(lambda c: c.leaves(False)[0], node._children)) + child_left_leaf = list([c.leaves(False)[0] for c in node._children]) indices = list(map(leaves.index, child_left_leaf)) if child_left_leaf: min_idx = min(indices) + for i in '$(find ../python3 -type f -name '\''*.py'\'' |grep -v '\''Tables\.py'\'')' + 2to3 -w -n ../python3/nltk/cluster/kmeans.py RefactoringTool: Skipping optional fixer: buffer RefactoringTool: Skipping optional fixer: idioms RefactoringTool: Skipping optional fixer: set_literal RefactoringTool: Skipping optional fixer: ws_comma RefactoringTool: Refactored ../python3/nltk/cluster/kmeans.py RefactoringTool: Files that were modified: 
RefactoringTool: ../python3/nltk/cluster/kmeans.py --- ../python3/nltk/cluster/kmeans.py (original) +++ ../python3/nltk/cluster/kmeans.py (refactored) @@ -4,7 +4,7 @@ # Author: Trevor Cohn # URL: # For license information, see LICENSE.TXT -from __future__ import print_function, unicode_literals + import copy import random + for i in '$(find ../python3 -type f -name '\''*.py'\'' |grep -v '\''Tables\.py'\'')' + 2to3 -w -n ../python3/nltk/cluster/gaac.py RefactoringTool: Skipping optional fixer: buffer RefactoringTool: Skipping optional fixer: idioms RefactoringTool: Skipping optional fixer: set_literal RefactoringTool: Skipping optional fixer: ws_comma RefactoringTool: Refactored ../python3/nltk/cluster/gaac.py RefactoringTool: Files that were modified: RefactoringTool: ../python3/nltk/cluster/gaac.py --- ../python3/nltk/cluster/gaac.py (original) +++ ../python3/nltk/cluster/gaac.py (refactored) @@ -4,7 +4,7 @@ # Author: Trevor Cohn # URL: # For license information, see LICENSE.TXT -from __future__ import print_function, unicode_literals + try: import numpy + for i in '$(find ../python3 -type f -name '\''*.py'\'' |grep -v '\''Tables\.py'\'')' + 2to3 -w -n ../python3/nltk/cluster/em.py RefactoringTool: Skipping optional fixer: buffer RefactoringTool: Skipping optional fixer: idioms RefactoringTool: Skipping optional fixer: set_literal RefactoringTool: Skipping optional fixer: ws_comma RefactoringTool: Refactored ../python3/nltk/cluster/em.py RefactoringTool: Files that were modified: RefactoringTool: ../python3/nltk/cluster/em.py --- ../python3/nltk/cluster/em.py (original) +++ ../python3/nltk/cluster/em.py (refactored) @@ -4,7 +4,7 @@ # Author: Trevor Cohn # URL: # For license information, see LICENSE.TXT -from __future__ import print_function, unicode_literals + try: import numpy except ImportError: + for i in '$(find ../python3 -type f -name '\''*.py'\'' |grep -v '\''Tables\.py'\'')' + 2to3 -w -n ../python3/nltk/cluster/api.py RefactoringTool: Skipping optional fixer: buffer RefactoringTool: Skipping optional fixer: idioms RefactoringTool: Skipping optional fixer: set_literal RefactoringTool: Skipping optional fixer: ws_comma RefactoringTool: No changes to ../python3/nltk/cluster/api.py RefactoringTool: Files that need to be modified: RefactoringTool: ../python3/nltk/cluster/api.py + for i in '$(find ../python3 -type f -name '\''*.py'\'' |grep -v '\''Tables\.py'\'')' + 2to3 -w -n ../python3/nltk/cluster/__init__.py RefactoringTool: Skipping optional fixer: buffer RefactoringTool: Skipping optional fixer: idioms RefactoringTool: Skipping optional fixer: set_literal RefactoringTool: Skipping optional fixer: ws_comma RefactoringTool: No changes to ../python3/nltk/cluster/__init__.py RefactoringTool: Files that need to be modified: RefactoringTool: ../python3/nltk/cluster/__init__.py + for i in '$(find ../python3 -type f -name '\''*.py'\'' |grep -v '\''Tables\.py'\'')' + 2to3 -w -n ../python3/nltk/classify/weka.py RefactoringTool: Skipping optional fixer: buffer RefactoringTool: Skipping optional fixer: idioms RefactoringTool: Skipping optional fixer: set_literal RefactoringTool: Skipping optional fixer: ws_comma RefactoringTool: Refactored ../python3/nltk/classify/weka.py RefactoringTool: Files that were modified: RefactoringTool: ../python3/nltk/classify/weka.py --- ../python3/nltk/classify/weka.py (original) +++ ../python3/nltk/classify/weka.py (refactored) @@ -8,7 +8,7 @@ """ Classifiers that make use of the external 'Weka' package. 
""" -from __future__ import print_function + import time import tempfile import os @@ -123,7 +123,7 @@ def parse_weka_distribution(self, s): probs = [float(v) for v in re.split('[*,]+', s) if v.strip()] - probs = dict(zip(self._formatter.labels(), probs)) + probs = dict(list(zip(self._formatter.labels(), probs))) return DictionaryProbDist(probs) def parse_weka_output(self, lines): @@ -191,7 +191,7 @@ if classifier in cls._CLASSIFIER_CLASS: javaclass = cls._CLASSIFIER_CLASS[classifier] - elif classifier in cls._CLASSIFIER_CLASS.values(): + elif classifier in list(cls._CLASSIFIER_CLASS.values()): javaclass = classifier else: raise ValueError('Unknown classifier %s' % classifier) @@ -260,7 +260,7 @@ # Determine the types of all features. features = {} for tok, label in tokens: - for (fname, fval) in tok.items(): + for (fname, fval) in list(tok.items()): if issubclass(type(fval), bool): ftype = '{True, False}' elif issubclass(type(fval), (compat.integer_types, float, bool)): + for i in '$(find ../python3 -type f -name '\''*.py'\'' |grep -v '\''Tables\.py'\'')' + 2to3 -w -n ../python3/nltk/classify/util.py RefactoringTool: Skipping optional fixer: buffer RefactoringTool: Skipping optional fixer: idioms RefactoringTool: Skipping optional fixer: set_literal RefactoringTool: Skipping optional fixer: ws_comma RefactoringTool: Refactored ../python3/nltk/classify/util.py RefactoringTool: Files that were modified: RefactoringTool: ../python3/nltk/classify/util.py --- ../python3/nltk/classify/util.py (original) +++ ../python3/nltk/classify/util.py (refactored) @@ -9,7 +9,7 @@ """ Utility functions and classes for classifiers. """ -from __future__ import print_function + import math @@ -222,10 +222,10 @@ random.shuffle(female_names) # Create a list of male names to be used as positive-labeled examples for training - positive = map(features, male_names[:2000]) + positive = list(map(features, male_names[:2000])) # Create a list of male and female names to be used as unlabeled examples - unlabeled = map(features, male_names[2000:2500] + female_names[:500]) + unlabeled = list(map(features, male_names[2000:2500] + female_names[:500])) # Create a test set with correctly-labeled male and female names test = [(name, True) for name in male_names[2500:2750]] \ + for i in '$(find ../python3 -type f -name '\''*.py'\'' |grep -v '\''Tables\.py'\'')' + 2to3 -w -n ../python3/nltk/classify/tadm.py RefactoringTool: Skipping optional fixer: buffer RefactoringTool: Skipping optional fixer: idioms RefactoringTool: Skipping optional fixer: set_literal RefactoringTool: Skipping optional fixer: ws_comma RefactoringTool: Refactored ../python3/nltk/classify/tadm.py RefactoringTool: Files that were modified: RefactoringTool: ../python3/nltk/classify/tadm.py --- ../python3/nltk/classify/tadm.py (original) +++ ../python3/nltk/classify/tadm.py (refactored) @@ -4,7 +4,7 @@ # Author: Joseph Frazee # URL: # For license information, see LICENSE.TXT -from __future__ import print_function, unicode_literals + import sys import subprocess + for i in '$(find ../python3 -type f -name '\''*.py'\'' |grep -v '\''Tables\.py'\'')' + 2to3 -w -n ../python3/nltk/classify/svm.py RefactoringTool: Skipping optional fixer: buffer RefactoringTool: Skipping optional fixer: idioms RefactoringTool: Skipping optional fixer: set_literal RefactoringTool: Skipping optional fixer: ws_comma RefactoringTool: No changes to ../python3/nltk/classify/svm.py RefactoringTool: Files that need to be modified: RefactoringTool: ../python3/nltk/classify/svm.py + for i in 
'$(find ../python3 -type f -name '\''*.py'\'' |grep -v '\''Tables\.py'\'')' + 2to3 -w -n ../python3/nltk/classify/scikitlearn.py RefactoringTool: Skipping optional fixer: buffer RefactoringTool: Skipping optional fixer: idioms RefactoringTool: Skipping optional fixer: set_literal RefactoringTool: Skipping optional fixer: ws_comma RefactoringTool: Refactored ../python3/nltk/classify/scikitlearn.py RefactoringTool: Files that were modified: RefactoringTool: ../python3/nltk/classify/scikitlearn.py --- ../python3/nltk/classify/scikitlearn.py (original) +++ ../python3/nltk/classify/scikitlearn.py (refactored) @@ -30,7 +30,7 @@ ... ('nb', MultinomialNB())]) >>> classif = SklearnClassifier(pipeline) """ -from __future__ import print_function, unicode_literals + from nltk.classify.api import ClassifierI from nltk.probability import DictionaryProbDist + for i in '$(find ../python3 -type f -name '\''*.py'\'' |grep -v '\''Tables\.py'\'')' + 2to3 -w -n ../python3/nltk/classify/rte_classify.py RefactoringTool: Skipping optional fixer: buffer RefactoringTool: Skipping optional fixer: idioms RefactoringTool: Skipping optional fixer: set_literal RefactoringTool: Skipping optional fixer: ws_comma RefactoringTool: Refactored ../python3/nltk/classify/rte_classify.py RefactoringTool: Files that were modified: RefactoringTool: ../python3/nltk/classify/rte_classify.py --- ../python3/nltk/classify/rte_classify.py (original) +++ ../python3/nltk/classify/rte_classify.py (refactored) @@ -16,7 +16,7 @@ TO DO: better Named Entity classification TO DO: add lemmatization """ -from __future__ import print_function + import nltk from nltk.classify.util import accuracy + for i in '$(find ../python3 -type f -name '\''*.py'\'' |grep -v '\''Tables\.py'\'')' + 2to3 -w -n ../python3/nltk/classify/positivenaivebayes.py RefactoringTool: Skipping optional fixer: buffer RefactoringTool: Skipping optional fixer: idioms RefactoringTool: Skipping optional fixer: set_literal RefactoringTool: Skipping optional fixer: ws_comma RefactoringTool: Refactored ../python3/nltk/classify/positivenaivebayes.py RefactoringTool: Files that were modified: RefactoringTool: ../python3/nltk/classify/positivenaivebayes.py --- ../python3/nltk/classify/positivenaivebayes.py (original) +++ ../python3/nltk/classify/positivenaivebayes.py (refactored) @@ -105,14 +105,14 @@ # Count up how many times each feature value occurred in positive examples. for featureset in positive_featuresets: - for fname, fval in featureset.items(): + for fname, fval in list(featureset.items()): positive_feature_freqdist[fname][fval] += 1 feature_values[fname].add(fval) fnames.add(fname) # Count up how many times each feature value occurred in unlabeled examples. for featureset in unlabeled_featuresets: - for fname, fval in featureset.items(): + for fname, fval in list(featureset.items()): unlabeled_feature_freqdist[fname][fval] += 1 feature_values[fname].add(fval) fnames.add(fname) @@ -139,11 +139,11 @@ # Create the P(fval|label, fname) distribution. 
feature_probdist = {} - for fname, freqdist in positive_feature_freqdist.items(): + for fname, freqdist in list(positive_feature_freqdist.items()): probdist = estimator(freqdist, bins=len(feature_values[fname])) feature_probdist[True, fname] = probdist - for fname, freqdist in unlabeled_feature_freqdist.items(): + for fname, freqdist in list(unlabeled_feature_freqdist.items()): global_probdist = estimator(freqdist, bins=len(feature_values[fname])) negative_feature_probs = {} for fval in feature_values[fname]: + for i in '$(find ../python3 -type f -name '\''*.py'\'' |grep -v '\''Tables\.py'\'')' + 2to3 -w -n ../python3/nltk/classify/naivebayes.py RefactoringTool: Skipping optional fixer: buffer RefactoringTool: Skipping optional fixer: idioms RefactoringTool: Skipping optional fixer: set_literal RefactoringTool: Skipping optional fixer: ws_comma RefactoringTool: Refactored ../python3/nltk/classify/naivebayes.py RefactoringTool: Files that were modified: RefactoringTool: ../python3/nltk/classify/naivebayes.py --- ../python3/nltk/classify/naivebayes.py (original) +++ ../python3/nltk/classify/naivebayes.py (refactored) @@ -29,7 +29,7 @@ | P(label|features) = -------------------------------------------- | SUM[l]( P(l) * P(f1|l) * ... * P(fn|l) ) """ -from __future__ import print_function, unicode_literals + from collections import defaultdict @@ -108,7 +108,7 @@ # Then add in the log probability of features given labels. for label in self._labels: - for (fname, fval) in featureset.items(): + for (fname, fval) in list(featureset.items()): if (label, fname) in self._feature_probdist: feature_probs = self._feature_probdist[label,fname] logprob[label] += feature_probs.logprob(fval) @@ -160,7 +160,7 @@ maxprob = defaultdict(lambda: 0.0) minprob = defaultdict(lambda: 1.0) - for (label, fname), probdist in self._feature_probdist.items(): + for (label, fname), probdist in list(self._feature_probdist.items()): for fval in probdist.samples(): feature = (fname, fval) features.add( feature ) @@ -191,7 +191,7 @@ # the label and featurename. for featureset, label in labeled_featuresets: label_freqdist[label] += 1 - for fname, fval in featureset.items(): + for fname, fval in list(featureset.items()): # Increment freq(fval|label, fname) feature_freqdist[label, fname][fval] += 1 # Record that fname can take the value fval. @@ -219,7 +219,7 @@ # Create the P(fval|label, fname) distribution feature_probdist = {} - for ((label, fname), freqdist) in feature_freqdist.items(): + for ((label, fname), freqdist) in list(feature_freqdist.items()): probdist = estimator(freqdist, bins=len(feature_values[fname])) feature_probdist[label,fname] = probdist + for i in '$(find ../python3 -type f -name '\''*.py'\'' |grep -v '\''Tables\.py'\'')' + 2to3 -w -n ../python3/nltk/classify/megam.py RefactoringTool: Skipping optional fixer: buffer RefactoringTool: Skipping optional fixer: idioms RefactoringTool: Skipping optional fixer: set_literal RefactoringTool: Skipping optional fixer: ws_comma RefactoringTool: Refactored ../python3/nltk/classify/megam.py RefactoringTool: Files that were modified: RefactoringTool: ../python3/nltk/classify/megam.py --- ../python3/nltk/classify/megam.py (original) +++ ../python3/nltk/classify/megam.py (refactored) @@ -22,7 +22,7 @@ .. 
_megam: http://www.umiacs.umd.edu/~hal/megam/index.html """ -from __future__ import print_function + import os import os.path + for i in '$(find ../python3 -type f -name '\''*.py'\'' |grep -v '\''Tables\.py'\'')' + 2to3 -w -n ../python3/nltk/classify/maxent.py RefactoringTool: Skipping optional fixer: buffer RefactoringTool: Skipping optional fixer: idioms RefactoringTool: Skipping optional fixer: set_literal RefactoringTool: Skipping optional fixer: ws_comma RefactoringTool: Refactored ../python3/nltk/classify/maxent.py --- ../python3/nltk/classify/maxent.py (original) +++ ../python3/nltk/classify/maxent.py (refactored) @@ -51,7 +51,7 @@ performed by classes that implement the ``MaxentFeatureEncodingI`` interface. """ -from __future__ import print_function, unicode_literals + __docformat__ = 'epytext en' try: @@ -522,7 +522,7 @@ encoding = [] # Convert input-features to joint-features: - for fname, fval in featureset.items(): + for fname, fval in list(featureset.items()): # Known feature name & value: if (fname, fval, label) in self._mapping: encoding.append((self._mapping[fname, fval, label], 1)) @@ -552,17 +552,17 @@ self._inv_mapping except AttributeError: self._inv_mapping = [-1]*len(self._mapping) - for (info, i) in self._mapping.items(): + for (info, i) in list(self._mapping.items()): self._inv_mapping[i] = info if f_id < len(self._mapping): (fname, fval, label) = self._inv_mapping[f_id] return '%s==%r and label is %r' % (fname, fval, label) - elif self._alwayson and f_id in self._alwayson.values(): - for (label, f_id2) in self._alwayson.items(): + elif self._alwayson and f_id in list(self._alwayson.values()): + for (label, f_id2) in list(self._alwayson.items()): if f_id==f_id2: return 'label is %r' % label - elif self._unseen and f_id in self._unseen.values(): - for (fname, f_id2) in self._unseen.items(): + elif self._unseen and f_id in list(self._unseen.values()): + for (fname, f_id2) in list(self._unseen.items()): if f_id==f_id2: return '%s is unseen' % fname else: raise ValueError('Bad feature id') @@ -613,7 +613,7 @@ seen_labels.add(label) # Record each of the features. 
- for (fname, fval) in tok.items(): + for (fname, fval) in list(tok.items()): # If a count cutoff is given, then only add a joint # feature once the corresponding (fname, fval, label) @@ -695,7 +695,7 @@ def encode(self, featureset, label): encoding = [] - for feature, value in featureset.items(): + for feature, value in list(featureset.items()): if (feature, label) not in self._mapping: self._mapping[(feature, label)] = len(self._mapping) if value not in self._label_mapping: @@ -849,7 +849,7 @@ encoding = [] # Convert input-features to joint-features: - for fname, fval in featureset.items(): + for fname, fval in list(featureset.items()): if isinstance(fval, (compat.integer_types, float)): # Known feature name & value: if (fname, type(fval), label) in self._mapping: @@ -885,17 +885,17 @@ self._inv_mapping except AttributeError: self._inv_mapping = [-1]*len(self._mapping) - for (info, i) in self._mapping.items(): + for (info, i) in list(self._mapping.items()): self._inv_mapping[i] = info if f_id < len(self._mapping): (fname, fval, label) = self._inv_mapping[f_id] return '%s==%r and label is %r' % (fname, fval, label) - elif self._alwayson and f_id in self._alwayson.values(): - for (label, f_id2) in self._alwayson.items(): + elif self._alwayson and f_id in list(self._alwayson.values()): + for (label, f_id2) in list(self._alwayson.items()): if f_id==f_id2: return 'label is %r' % label - elif self._unseen and f_id in self._unseen.values(): - for (fname, f_id2) in self._unseen.items(): + elif self._unseen and f_id in list(self._unseen.values()): + for (fname, f_id2) in list(self._unseen.items()): if f_id==f_id2: return '%s is unseen' % fname else: raise ValueError('Bad feature id') RefactoringTool: Files that were modified: RefactoringTool: ../python3/nltk/classify/maxent.py @@ -949,7 +949,7 @@ seen_labels.add(label) # Record each of the features. - for (fname, fval) in tok.items(): + for (fname, fval) in list(tok.items()): if(type(fval) in (int, float)): fval = type(fval) # If a count cutoff is given, then only add a joint # feature once the corresponding (fname, fval, label) + for i in '$(find ../python3 -type f -name '\''*.py'\'' |grep -v '\''Tables\.py'\'')' + 2to3 -w -n ../python3/nltk/classify/decisiontree.py RefactoringTool: Skipping optional fixer: buffer RefactoringTool: Skipping optional fixer: idioms RefactoringTool: Skipping optional fixer: set_literal RefactoringTool: Skipping optional fixer: ws_comma RefactoringTool: Refactored ../python3/nltk/classify/decisiontree.py RefactoringTool: Files that were modified: RefactoringTool: ../python3/nltk/classify/decisiontree.py --- ../python3/nltk/classify/decisiontree.py (original) +++ ../python3/nltk/classify/decisiontree.py (refactored) @@ -10,7 +10,7 @@ the basis of a tree structure, where branches correspond to conditions on feature values, and leaves correspond to label assignments. """ -from __future__ import print_function, unicode_literals + from collections import defaultdict @@ -44,7 +44,7 @@ def labels(self): labels = [self._label] if self._decisions is not None: - for dt in self._decisions.values(): + for dt in list(self._decisions.values()): labels.extend(dt.labels()) if self._default is not None: labels.extend(self._default.labels()) @@ -145,7 +145,7 @@ if feature_values is None and binary: feature_values = defaultdict(set) for featureset, label in labeled_featuresets: - for fname, fval in featureset.items(): + for fname, fval in list(featureset.items()): feature_values[fname].add(fval) # Start with a stump.
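
Nearly every hunk above is one of a handful of mechanical 2to3 rewrites. In Python 3, dict.keys()/.values()/.items() return live views and map(), filter(), and zip() return one-shot iterators, so the fixers wrap each call in list() wherever the Python 2 code might have relied on getting a real list back. The rewrite is purely syntactic, which is why it occasionally misfires: in the ipipan.py hunk further up, value = map(value) appears to call a callback parameter that happens to be named map, yet the fixer still rewrites it to value = list(map(value)); and in the timit.py hunk the converted io.StringIO(self.wav(...)) keeps the old FIXME for good reason, since WAV data is bytes and would need io.BytesIO. A minimal sketch of the behaviour the list() wrapping papers over (illustrative names only, not taken from the NLTK sources):

# Why 2to3 wraps dict views and iterators in list().
counts = {'hard': 3, 'line': 2, 'serve': 1}

# dict.items() is a live view in Python 3: deleting keys while
# iterating over it raises RuntimeError. list() takes a snapshot,
# restoring the Python 2 behaviour of iterating over a copy.
for word, n in list(counts.items()):
    if n < 2:
        del counts[word]

# map() and filter() now return one-shot iterators, not lists.
lengths = map(len, ['hard', 'interest', 'line'])
assert list(lengths) == [4, 8, 4]
assert list(lengths) == []   # already exhausted on the second read

# Code that indexes or re-reads the result must materialise it first,
# which is exactly what the mechanical list(...) wrapping does.
lengths = list(map(len, ['hard', 'interest', 'line']))
assert lengths[0] == 4 and lengths == [4, 8, 4]

The wrapping is safe but conservative: it costs an extra copy per call, and a hand port would normally iterate the views directly, calling list() only where the result is stored, indexed, or where the dict is mutated during the loop.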
+ for i in '$(find ../python3 -type f -name '\''*.py'\'' |grep -v '\''Tables\.py'\'')' + 2to3 -w -n ../python3/nltk/classify/api.py RefactoringTool: Skipping optional fixer: buffer RefactoringTool: Skipping optional fixer: idioms RefactoringTool: Skipping optional fixer: set_literal RefactoringTool: Skipping optional fixer: ws_comma RefactoringTool: No changes to ../python3/nltk/classify/api.py RefactoringTool: Files that need to be modified: RefactoringTool: ../python3/nltk/classify/api.py + for i in '$(find ../python3 -type f -name '\''*.py'\'' |grep -v '\''Tables\.py'\'')' + 2to3 -w -n ../python3/nltk/classify/__init__.py RefactoringTool: Skipping optional fixer: buffer RefactoringTool: Skipping optional fixer: idioms RefactoringTool: Skipping optional fixer: set_literal RefactoringTool: Skipping optional fixer: ws_comma RefactoringTool: No changes to ../python3/nltk/classify/__init__.py RefactoringTool: Files that need to be modified: RefactoringTool: ../python3/nltk/classify/__init__.py + for i in '$(find ../python3 -type f -name '\''*.py'\'' |grep -v '\''Tables\.py'\'')' + 2to3 -w -n ../python3/nltk/chunk/util.py RefactoringTool: Skipping optional fixer: buffer RefactoringTool: Skipping optional fixer: idioms RefactoringTool: Skipping optional fixer: set_literal RefactoringTool: Skipping optional fixer: ws_comma RefactoringTool: Refactored ../python3/nltk/chunk/util.py RefactoringTool: Files that were modified: RefactoringTool: ../python3/nltk/chunk/util.py --- ../python3/nltk/chunk/util.py (original) +++ ../python3/nltk/chunk/util.py (refactored) @@ -5,7 +5,7 @@ # Steven Bird (minor additions) # URL: # For license information, see LICENSE.TXT -from __future__ import print_function, unicode_literals + import re + for i in '$(find ../python3 -type f -name '\''*.py'\'' |grep -v '\''Tables\.py'\'')' + 2to3 -w -n ../python3/nltk/chunk/regexp.py RefactoringTool: Skipping optional fixer: buffer RefactoringTool: Skipping optional fixer: idioms RefactoringTool: Skipping optional fixer: set_literal RefactoringTool: Skipping optional fixer: ws_comma RefactoringTool: Refactored ../python3/nltk/chunk/regexp.py RefactoringTool: Files that were modified: RefactoringTool: ../python3/nltk/chunk/regexp.py --- ../python3/nltk/chunk/regexp.py (original) +++ ../python3/nltk/chunk/regexp.py (refactored) @@ -5,8 +5,8 @@ # Steven Bird (minor additions) # URL: # For license information, see LICENSE.TXT -from __future__ import print_function, unicode_literals -from __future__ import division + + import re + for i in '$(find ../python3 -type f -name '\''*.py'\'' |grep -v '\''Tables\.py'\'')' + 2to3 -w -n ../python3/nltk/chunk/named_entity.py RefactoringTool: Skipping optional fixer: buffer RefactoringTool: Skipping optional fixer: idioms RefactoringTool: Skipping optional fixer: set_literal RefactoringTool: Skipping optional fixer: ws_comma RefactoringTool: Refactored ../python3/nltk/chunk/named_entity.py RefactoringTool: Files that were modified: RefactoringTool: ../python3/nltk/chunk/named_entity.py --- ../python3/nltk/chunk/named_entity.py (original) +++ ../python3/nltk/chunk/named_entity.py (refactored) @@ -8,7 +8,7 @@ """ Named entity chunker """ -from __future__ import print_function + import os, re, pickle from xml.etree import ElementTree as ET + for i in '$(find ../python3 -type f -name '\''*.py'\'' |grep -v '\''Tables\.py'\'')' + 2to3 -w -n ../python3/nltk/chunk/api.py RefactoringTool: Skipping optional fixer: buffer RefactoringTool: Skipping optional fixer: idioms RefactoringTool: Skipping optional 
fixer: set_literal RefactoringTool: Skipping optional fixer: ws_comma RefactoringTool: No changes to ../python3/nltk/chunk/api.py RefactoringTool: Files that need to be modified: RefactoringTool: ../python3/nltk/chunk/api.py + for i in '$(find ../python3 -type f -name '\''*.py'\'' |grep -v '\''Tables\.py'\'')' + 2to3 -w -n ../python3/nltk/chunk/__init__.py RefactoringTool: Skipping optional fixer: buffer RefactoringTool: Skipping optional fixer: idioms RefactoringTool: Skipping optional fixer: set_literal RefactoringTool: Skipping optional fixer: ws_comma RefactoringTool: No changes to ../python3/nltk/chunk/__init__.py RefactoringTool: Files that need to be modified: RefactoringTool: ../python3/nltk/chunk/__init__.py + for i in '$(find ../python3 -type f -name '\''*.py'\'' |grep -v '\''Tables\.py'\'')' + 2to3 -w -n ../python3/nltk/chat/zen.py RefactoringTool: Skipping optional fixer: buffer RefactoringTool: Skipping optional fixer: idioms RefactoringTool: Skipping optional fixer: set_literal RefactoringTool: Skipping optional fixer: ws_comma RefactoringTool: Refactored ../python3/nltk/chat/zen.py RefactoringTool: Files that were modified: RefactoringTool: ../python3/nltk/chat/zen.py --- ../python3/nltk/chat/zen.py (original) +++ ../python3/nltk/chat/zen.py (refactored) @@ -35,7 +35,7 @@ respond to a question by asking a different question, in much the same way as Eliza. """ -from __future__ import print_function + from nltk.chat.util import Chat, reflections + for i in '$(find ../python3 -type f -name '\''*.py'\'' |grep -v '\''Tables\.py'\'')' + 2to3 -w -n ../python3/nltk/chat/util.py RefactoringTool: Skipping optional fixer: buffer RefactoringTool: Skipping optional fixer: idioms RefactoringTool: Skipping optional fixer: set_literal RefactoringTool: Skipping optional fixer: ws_comma RefactoringTool: Refactored ../python3/nltk/chat/util.py RefactoringTool: Files that were modified: RefactoringTool: ../python3/nltk/chat/util.py --- ../python3/nltk/chat/util.py (original) +++ ../python3/nltk/chat/util.py (refactored) @@ -7,7 +7,7 @@ # Based on an Eliza implementation by Joe Strout , # Jeff Epler and Jez Higgins . 
-from __future__ import print_function + import re import random @@ -55,7 +55,7 @@ def _compile_reflections(self): - sorted_refl = sorted(self._reflections.keys(), key=len, + sorted_refl = sorted(list(self._reflections.keys()), key=len, reverse=True) return re.compile(r"\b({0})\b".format("|".join(map(re.escape, sorted_refl))), re.IGNORECASE) + for i in '$(find ../python3 -type f -name '\''*.py'\'' |grep -v '\''Tables\.py'\'')' + 2to3 -w -n ../python3/nltk/chat/suntsu.py RefactoringTool: Skipping optional fixer: buffer RefactoringTool: Skipping optional fixer: idioms RefactoringTool: Skipping optional fixer: set_literal RefactoringTool: Skipping optional fixer: ws_comma RefactoringTool: Refactored ../python3/nltk/chat/suntsu.py RefactoringTool: Files that were modified: RefactoringTool: ../python3/nltk/chat/suntsu.py --- ../python3/nltk/chat/suntsu.py (original) +++ ../python3/nltk/chat/suntsu.py (refactored) @@ -13,7 +13,7 @@ Hosted by the Gutenberg Project http://www.gutenberg.org/ """ -from __future__ import print_function + from nltk.chat.util import Chat, reflections + for i in '$(find ../python3 -type f -name '\''*.py'\'' |grep -v '\''Tables\.py'\'')' + 2to3 -w -n ../python3/nltk/chat/rude.py RefactoringTool: Skipping optional fixer: buffer RefactoringTool: Skipping optional fixer: idioms RefactoringTool: Skipping optional fixer: set_literal RefactoringTool: Skipping optional fixer: ws_comma RefactoringTool: Refactored ../python3/nltk/chat/rude.py RefactoringTool: Files that were modified: RefactoringTool: ../python3/nltk/chat/rude.py --- ../python3/nltk/chat/rude.py (original) +++ ../python3/nltk/chat/rude.py (refactored) @@ -4,7 +4,7 @@ # Author: Peter Spiller # URL: # For license information, see LICENSE.TXT -from __future__ import print_function + from nltk.chat.util import Chat, reflections + for i in '$(find ../python3 -type f -name '\''*.py'\'' |grep -v '\''Tables\.py'\'')' + 2to3 -w -n ../python3/nltk/chat/iesha.py RefactoringTool: Skipping optional fixer: buffer RefactoringTool: Skipping optional fixer: idioms RefactoringTool: Skipping optional fixer: set_literal RefactoringTool: Skipping optional fixer: ws_comma RefactoringTool: Refactored ../python3/nltk/chat/iesha.py RefactoringTool: Files that were modified: RefactoringTool: ../python3/nltk/chat/iesha.py --- ../python3/nltk/chat/iesha.py (original) +++ ../python3/nltk/chat/iesha.py (refactored) @@ -10,7 +10,7 @@ anime junky that frequents YahooMessenger or MSNM. All spelling mistakes and flawed grammar are intentional. """ -from __future__ import print_function + from nltk.chat.util import Chat + for i in '$(find ../python3 -type f -name '\''*.py'\'' |grep -v '\''Tables\.py'\'')' + 2to3 -w -n ../python3/nltk/chat/eliza.py RefactoringTool: Skipping optional fixer: buffer RefactoringTool: Skipping optional fixer: idioms RefactoringTool: Skipping optional fixer: set_literal RefactoringTool: Skipping optional fixer: ws_comma RefactoringTool: Refactored ../python3/nltk/chat/eliza.py RefactoringTool: Files that were modified: RefactoringTool: ../python3/nltk/chat/eliza.py --- ../python3/nltk/chat/eliza.py (original) +++ ../python3/nltk/chat/eliza.py (refactored) @@ -12,7 +12,7 @@ # a translation table used to convert things you say into things the # computer says back, e.g. 
"I am" --> "you are" -from __future__ import print_function + from nltk.chat.util import Chat, reflections # a table of response pairs, where each pair consists of a + for i in '$(find ../python3 -type f -name '\''*.py'\'' |grep -v '\''Tables\.py'\'')' + 2to3 -w -n ../python3/nltk/chat/__init__.py RefactoringTool: Skipping optional fixer: buffer RefactoringTool: Skipping optional fixer: idioms RefactoringTool: Skipping optional fixer: set_literal RefactoringTool: Skipping optional fixer: ws_comma RefactoringTool: Refactored ../python3/nltk/chat/__init__.py RefactoringTool: Files that were modified: RefactoringTool: ../python3/nltk/chat/__init__.py --- ../python3/nltk/chat/__init__.py (original) +++ ../python3/nltk/chat/__init__.py (refactored) @@ -15,7 +15,7 @@ These chatbots may not work using the windows command line or the windows IDLE GUI. """ -from __future__ import print_function + from nltk.chat.util import Chat from nltk.chat.eliza import eliza_chat + for i in '$(find ../python3 -type f -name '\''*.py'\'' |grep -v '\''Tables\.py'\'')' + 2to3 -w -n ../python3/nltk/ccg/lexicon.py RefactoringTool: Skipping optional fixer: buffer RefactoringTool: Skipping optional fixer: idioms RefactoringTool: Skipping optional fixer: set_literal RefactoringTool: Skipping optional fixer: ws_comma RefactoringTool: Refactored ../python3/nltk/ccg/lexicon.py RefactoringTool: Files that were modified: RefactoringTool: ../python3/nltk/ccg/lexicon.py --- ../python3/nltk/ccg/lexicon.py (original) +++ ../python3/nltk/ccg/lexicon.py (refactored) @@ -4,7 +4,7 @@ # Author: Graeme Gange # URL: # For license information, see LICENSE.TXT -from __future__ import unicode_literals + import re from collections import defaultdict + for i in '$(find ../python3 -type f -name '\''*.py'\'' |grep -v '\''Tables\.py'\'')' + 2to3 -w -n ../python3/nltk/ccg/combinator.py RefactoringTool: Skipping optional fixer: buffer RefactoringTool: Skipping optional fixer: idioms RefactoringTool: Skipping optional fixer: set_literal RefactoringTool: Skipping optional fixer: ws_comma RefactoringTool: Refactored ../python3/nltk/ccg/combinator.py RefactoringTool: Files that were modified: RefactoringTool: ../python3/nltk/ccg/combinator.py --- ../python3/nltk/ccg/combinator.py (original) +++ ../python3/nltk/ccg/combinator.py (refactored) @@ -4,7 +4,7 @@ # Author: Graeme Gange # URL: # For license information, see LICENSE.TXT -from __future__ import unicode_literals + from nltk.compat import python_2_unicode_compatible from nltk.ccg.api import FunctionalCategory + for i in '$(find ../python3 -type f -name '\''*.py'\'' |grep -v '\''Tables\.py'\'')' + 2to3 -w -n ../python3/nltk/ccg/chart.py RefactoringTool: Skipping optional fixer: buffer RefactoringTool: Skipping optional fixer: idioms RefactoringTool: Skipping optional fixer: set_literal RefactoringTool: Skipping optional fixer: ws_comma RefactoringTool: Refactored ../python3/nltk/ccg/chart.py RefactoringTool: Files that were modified: RefactoringTool: ../python3/nltk/ccg/chart.py --- ../python3/nltk/ccg/chart.py (original) +++ ../python3/nltk/ccg/chart.py (refactored) @@ -29,7 +29,7 @@ This entire process is shown far more clearly in the demonstration: python chart.py """ -from __future__ import print_function, division, unicode_literals + import itertools + for i in '$(find ../python3 -type f -name '\''*.py'\'' |grep -v '\''Tables\.py'\'')' + 2to3 -w -n ../python3/nltk/ccg/api.py RefactoringTool: Skipping optional fixer: buffer RefactoringTool: Skipping optional fixer: idioms RefactoringTool: 
Skipping optional fixer: set_literal RefactoringTool: Skipping optional fixer: ws_comma RefactoringTool: Refactored ../python3/nltk/ccg/api.py RefactoringTool: Files that were modified: RefactoringTool: ../python3/nltk/ccg/api.py --- ../python3/nltk/ccg/api.py (original) +++ ../python3/nltk/ccg/api.py (refactored) @@ -4,7 +4,7 @@ # Author: Graeme Gange # URL: # For license information, see LICENSE.TXT -from __future__ import unicode_literals + from nltk.internals import raise_unorderable_types from nltk.compat import (total_ordering, python_2_unicode_compatible, unicode_repr) + for i in '$(find ../python3 -type f -name '\''*.py'\'' |grep -v '\''Tables\.py'\'')' + 2to3 -w -n ../python3/nltk/ccg/__init__.py RefactoringTool: Skipping optional fixer: buffer RefactoringTool: Skipping optional fixer: idioms RefactoringTool: Skipping optional fixer: set_literal RefactoringTool: Skipping optional fixer: ws_comma RefactoringTool: No changes to ../python3/nltk/ccg/__init__.py RefactoringTool: Files that need to be modified: RefactoringTool: ../python3/nltk/ccg/__init__.py + for i in '$(find ../python3 -type f -name '\''*.py'\'' |grep -v '\''Tables\.py'\'')' + 2to3 -w -n ../python3/nltk/book.py RefactoringTool: Skipping optional fixer: buffer RefactoringTool: Skipping optional fixer: idioms RefactoringTool: Skipping optional fixer: set_literal RefactoringTool: Skipping optional fixer: ws_comma RefactoringTool: Refactored ../python3/nltk/book.py RefactoringTool: Files that were modified: RefactoringTool: ../python3/nltk/book.py --- ../python3/nltk/book.py (original) +++ ../python3/nltk/book.py (refactored) @@ -5,7 +5,7 @@ # # URL: # For license information, see LICENSE.TXT -from __future__ import print_function + from nltk.corpus import (gutenberg, genesis, inaugural, nps_chat, webtext, treebank, wordnet) + for i in '$(find ../python3 -type f -name '\''*.py'\'' |grep -v '\''Tables\.py'\'')' + 2to3 -w -n ../python3/nltk/app/wordnet_app.py RefactoringTool: Skipping optional fixer: buffer RefactoringTool: Skipping optional fixer: idioms RefactoringTool: Skipping optional fixer: set_literal RefactoringTool: Skipping optional fixer: ws_comma RefactoringTool: Refactored ../python3/nltk/app/wordnet_app.py RefactoringTool: Files that were modified: RefactoringTool: ../python3/nltk/app/wordnet_app.py --- ../python3/nltk/app/wordnet_app.py (original) +++ ../python3/nltk/app/wordnet_app.py (refactored) @@ -44,7 +44,7 @@ # modifying to be compliant with NLTK's coding standards. Tests also # need to be develop to ensure this continues to work in the face of # changes to other NLTK packages. -from __future__ import print_function + # Allow this program to run inside the NLTK source tree. 
from sys import path @@ -70,7 +70,7 @@ if compat.PY3: from http.server import HTTPServer, BaseHTTPRequestHandler else: - from BaseHTTPServer import HTTPServer, BaseHTTPRequestHandler + from http.server import HTTPServer, BaseHTTPRequestHandler # now included in local file # from util import html_header, html_trailer, \ + for i in '$(find ../python3 -type f -name '\''*.py'\'' |grep -v '\''Tables\.py'\'')' + 2to3 -w -n ../python3/nltk/app/wordfreq_app.py RefactoringTool: Skipping optional fixer: buffer RefactoringTool: Skipping optional fixer: idioms RefactoringTool: Skipping optional fixer: set_literal RefactoringTool: Skipping optional fixer: ws_comma RefactoringTool: Refactored ../python3/nltk/app/wordfreq_app.py RefactoringTool: Files that were modified: RefactoringTool: ../python3/nltk/app/wordfreq_app.py --- ../python3/nltk/app/wordfreq_app.py (original) +++ ../python3/nltk/app/wordfreq_app.py (refactored) @@ -19,7 +19,7 @@ pylab.xlabel("Samples") pylab.ylabel("Cumulative Percentage") pylab.plot(values) - pylab.xticks(range(len(samples)), [str(s) for s in samples], rotation=90) + pylab.xticks(list(range(len(samples))), [str(s) for s in samples], rotation=90) pylab.show() def app(): + for i in '$(find ../python3 -type f -name '\''*.py'\'' |grep -v '\''Tables\.py'\'')' + 2to3 -w -n ../python3/nltk/app/srparser_app.py RefactoringTool: Skipping optional fixer: buffer RefactoringTool: Skipping optional fixer: idioms RefactoringTool: Skipping optional fixer: set_literal RefactoringTool: Skipping optional fixer: ws_comma RefactoringTool: No changes to ../python3/nltk/app/srparser_app.py RefactoringTool: Files that need to be modified: RefactoringTool: ../python3/nltk/app/srparser_app.py + for i in '$(find ../python3 -type f -name '\''*.py'\'' |grep -v '\''Tables\.py'\'')' + 2to3 -w -n ../python3/nltk/app/rdparser_app.py RefactoringTool: Skipping optional fixer: buffer RefactoringTool: Skipping optional fixer: idioms RefactoringTool: Skipping optional fixer: set_literal RefactoringTool: Skipping optional fixer: ws_comma RefactoringTool: No changes to ../python3/nltk/app/rdparser_app.py RefactoringTool: Files that need to be modified: RefactoringTool: ../python3/nltk/app/rdparser_app.py + for i in '$(find ../python3 -type f -name '\''*.py'\'' |grep -v '\''Tables\.py'\'')' + 2to3 -w -n ../python3/nltk/app/nemo_app.py RefactoringTool: Skipping optional fixer: buffer RefactoringTool: Skipping optional fixer: idioms RefactoringTool: Skipping optional fixer: set_literal RefactoringTool: Skipping optional fixer: ws_comma RefactoringTool: No changes to ../python3/nltk/app/nemo_app.py RefactoringTool: Files that need to be modified: RefactoringTool: ../python3/nltk/app/nemo_app.py + for i in '$(find ../python3 -type f -name '\''*.py'\'' |grep -v '\''Tables\.py'\'')' + 2to3 -w -n ../python3/nltk/app/concordance_app.py RefactoringTool: Skipping optional fixer: buffer RefactoringTool: Skipping optional fixer: idioms RefactoringTool: Skipping optional fixer: set_literal RefactoringTool: Skipping optional fixer: ws_comma RefactoringTool: Refactored ../python3/nltk/app/concordance_app.py RefactoringTool: Files that were modified: RefactoringTool: ../python3/nltk/app/concordance_app.py --- ../python3/nltk/app/concordance_app.py (original) +++ ../python3/nltk/app/concordance_app.py (refactored) @@ -12,7 +12,7 @@ if nltk.compat.PY3: import queue as q else: - import Queue as q + import queue as q import tkinter.font from tkinter import (Tk, Button, END, Entry, Frame, IntVar, LEFT, Label, Menu, OptionMenu, 
SUNKEN, Scrollbar, + for i in '$(find ../python3 -type f -name '\''*.py'\'' |grep -v '\''Tables\.py'\'')' + 2to3 -w -n ../python3/nltk/app/collocations_app.py RefactoringTool: Skipping optional fixer: buffer RefactoringTool: Skipping optional fixer: idioms RefactoringTool: Skipping optional fixer: set_literal RefactoringTool: Skipping optional fixer: ws_comma RefactoringTool: Refactored ../python3/nltk/app/collocations_app.py RefactoringTool: Files that were modified: RefactoringTool: ../python3/nltk/app/collocations_app.py --- ../python3/nltk/app/collocations_app.py (original) +++ ../python3/nltk/app/collocations_app.py (refactored) @@ -13,7 +13,7 @@ if nltk.compat.PY3: import queue as q else: - import Queue as q + import queue as q from tkinter import (Button, END, Frame, IntVar, LEFT, Label, Menu, OptionMenu, SUNKEN, Scrollbar, StringVar, Text, Tk) + for i in '$(find ../python3 -type f -name '\''*.py'\'' |grep -v '\''Tables\.py'\'')' + 2to3 -w -n ../python3/nltk/app/chunkparser_app.py RefactoringTool: Skipping optional fixer: buffer RefactoringTool: Skipping optional fixer: idioms RefactoringTool: Skipping optional fixer: set_literal RefactoringTool: Skipping optional fixer: ws_comma RefactoringTool: No changes to ../python3/nltk/app/chunkparser_app.py RefactoringTool: Files that need to be modified: RefactoringTool: ../python3/nltk/app/chunkparser_app.py + for i in '$(find ../python3 -type f -name '\''*.py'\'' |grep -v '\''Tables\.py'\'')' + 2to3 -w -n ../python3/nltk/app/chartparser_app.py RefactoringTool: Skipping optional fixer: buffer RefactoringTool: Skipping optional fixer: idioms RefactoringTool: Skipping optional fixer: set_literal RefactoringTool: Skipping optional fixer: ws_comma RefactoringTool: No changes to ../python3/nltk/app/chartparser_app.py RefactoringTool: Files that need to be modified: RefactoringTool: ../python3/nltk/app/chartparser_app.py + for i in '$(find ../python3 -type f -name '\''*.py'\'' |grep -v '\''Tables\.py'\'')' + 2to3 -w -n ../python3/nltk/app/__init__.py RefactoringTool: Skipping optional fixer: buffer RefactoringTool: Skipping optional fixer: idioms RefactoringTool: Skipping optional fixer: set_literal RefactoringTool: Skipping optional fixer: ws_comma RefactoringTool: No changes to ../python3/nltk/app/__init__.py RefactoringTool: Files that need to be modified: RefactoringTool: ../python3/nltk/app/__init__.py + for i in '$(find ../python3 -type f -name '\''*.py'\'' |grep -v '\''Tables\.py'\'')' + 2to3 -w -n ../python3/nltk/align/util.py RefactoringTool: Skipping optional fixer: buffer RefactoringTool: Skipping optional fixer: idioms RefactoringTool: Skipping optional fixer: set_literal RefactoringTool: Skipping optional fixer: ws_comma RefactoringTool: No files need to be modified. 
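(Aside, not part of the build output: the concordance_app.py and collocations_app.py hunks above show 2to3's fix_imports rewriting even the guarded Python 2 fallback "import Queue as q" to "import queue as q". The fixer works purely on syntax, so it renames the module in both arms of the PY3 compatibility check, leaving the two branches identical; the stdlib module itself was renamed from Queue to queue in Python 3. A minimal sketch of the renamed module:)

import queue as q  # Python 3 spelling; the Python 2 module was named 'Queue'

jobs = q.Queue()       # FIFO queue from the renamed stdlib module
jobs.put('tokenize')
print(jobs.get())      # tokenize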
+ for i in '$(find ../python3 -type f -name '\''*.py'\'' |grep -v '\''Tables\.py'\'')' + 2to3 -w -n ../python3/nltk/align/phrase_based.py RefactoringTool: Skipping optional fixer: buffer RefactoringTool: Skipping optional fixer: idioms RefactoringTool: Skipping optional fixer: set_literal RefactoringTool: Skipping optional fixer: ws_comma RefactoringTool: No changes to ../python3/nltk/align/phrase_based.py RefactoringTool: Files that need to be modified: RefactoringTool: ../python3/nltk/align/phrase_based.py + for i in '$(find ../python3 -type f -name '\''*.py'\'' |grep -v '\''Tables\.py'\'')' + 2to3 -w -n ../python3/nltk/align/ibm3.py RefactoringTool: Skipping optional fixer: buffer RefactoringTool: Skipping optional fixer: idioms RefactoringTool: Skipping optional fixer: set_literal RefactoringTool: Skipping optional fixer: ws_comma RefactoringTool: Refactored ../python3/nltk/align/ibm3.py RefactoringTool: Files that were modified: RefactoringTool: ../python3/nltk/align/ibm3.py --- ../python3/nltk/align/ibm3.py (original) +++ ../python3/nltk/align/ibm3.py (refactored) @@ -6,7 +6,7 @@ # URL: # For license information, see LICENSE.TXT -from __future__ import division + from collections import defaultdict from nltk.align import AlignedSent from nltk.align.ibm2 import IBMModel2 + for i in '$(find ../python3 -type f -name '\''*.py'\'' |grep -v '\''Tables\.py'\'')' + 2to3 -w -n ../python3/nltk/align/ibm2.py RefactoringTool: Skipping optional fixer: buffer RefactoringTool: Skipping optional fixer: idioms RefactoringTool: Skipping optional fixer: set_literal RefactoringTool: Skipping optional fixer: ws_comma RefactoringTool: Refactored ../python3/nltk/align/ibm2.py RefactoringTool: Files that were modified: RefactoringTool: ../python3/nltk/align/ibm2.py --- ../python3/nltk/align/ibm2.py (original) +++ ../python3/nltk/align/ibm2.py (refactored) @@ -6,7 +6,7 @@ # URL: # For license information, see LICENSE.TXT -from __future__ import division + from collections import defaultdict from nltk.align import AlignedSent from nltk.corpus import comtrans + for i in '$(find ../python3 -type f -name '\''*.py'\'' |grep -v '\''Tables\.py'\'')' + 2to3 -w -n ../python3/nltk/align/ibm1.py RefactoringTool: Skipping optional fixer: buffer RefactoringTool: Skipping optional fixer: idioms RefactoringTool: Skipping optional fixer: set_literal RefactoringTool: Skipping optional fixer: ws_comma RefactoringTool: Refactored ../python3/nltk/align/ibm1.py RefactoringTool: Files that were modified: RefactoringTool: ../python3/nltk/align/ibm1.py --- ../python3/nltk/align/ibm1.py (original) +++ ../python3/nltk/align/ibm1.py (refactored) @@ -12,7 +12,7 @@ # URL: # For license information, see LICENSE.TXT -from __future__ import division + from collections import defaultdict from nltk.align import AlignedSent from nltk.corpus import comtrans + for i in '$(find ../python3 -type f -name '\''*.py'\'' |grep -v '\''Tables\.py'\'')' + 2to3 -w -n ../python3/nltk/align/gdfa.py RefactoringTool: Skipping optional fixer: buffer RefactoringTool: Skipping optional fixer: idioms RefactoringTool: Skipping optional fixer: set_literal RefactoringTool: Skipping optional fixer: ws_comma RefactoringTool: No changes to ../python3/nltk/align/gdfa.py RefactoringTool: Files that need to be modified: RefactoringTool: ../python3/nltk/align/gdfa.py + for i in '$(find ../python3 -type f -name '\''*.py'\'' |grep -v '\''Tables\.py'\'')' + 2to3 -w -n ../python3/nltk/align/gale_church.py RefactoringTool: Skipping optional fixer: buffer RefactoringTool: 
Skipping optional fixer: idioms RefactoringTool: Skipping optional fixer: set_literal RefactoringTool: Skipping optional fixer: ws_comma RefactoringTool: Refactored ../python3/nltk/align/gale_church.py RefactoringTool: Files that were modified: RefactoringTool: ../python3/nltk/align/gale_church.py --- ../python3/nltk/align/gale_church.py (original) +++ ../python3/nltk/align/gale_church.py (refactored) @@ -16,7 +16,7 @@ """ -from __future__ import division + import math try: @@ -203,10 +203,10 @@ v = first while v != split_value: yield v - v = it.next() + v = next(it) while True: - yield _chunk_iterator(it.next()) + yield _chunk_iterator(next(it)) def parse_token_stream(stream, soft_delimiter, hard_delimiter): + for i in '$(find ../python3 -type f -name '\''*.py'\'' |grep -v '\''Tables\.py'\'')' + 2to3 -w -n ../python3/nltk/align/bleu.py RefactoringTool: Skipping optional fixer: buffer RefactoringTool: Skipping optional fixer: idioms RefactoringTool: Skipping optional fixer: set_literal RefactoringTool: Skipping optional fixer: ws_comma RefactoringTool: Refactored ../python3/nltk/align/bleu.py RefactoringTool: Files that were modified: RefactoringTool: ../python3/nltk/align/bleu.py --- ../python3/nltk/align/bleu.py (original) +++ ../python3/nltk/align/bleu.py (refactored) @@ -6,7 +6,7 @@ # URL: # For license information, see LICENSE.TXT -from __future__ import division + import math @@ -189,7 +189,7 @@ for ngram in counts: max_counts[ngram] = max(max_counts.get(ngram, 0), reference_counts[ngram]) - clipped_counts = dict((ngram, min(count, max_counts[ngram])) for ngram, count in counts.items()) + clipped_counts = dict((ngram, min(count, max_counts[ngram])) for ngram, count in list(counts.items())) return sum(clipped_counts.values()) / sum(counts.values()) + for i in '$(find ../python3 -type f -name '\''*.py'\'' |grep -v '\''Tables\.py'\'')' + 2to3 -w -n ../python3/nltk/align/api.py RefactoringTool: Skipping optional fixer: buffer RefactoringTool: Skipping optional fixer: idioms RefactoringTool: Skipping optional fixer: set_literal RefactoringTool: Skipping optional fixer: ws_comma RefactoringTool: Refactored ../python3/nltk/align/api.py RefactoringTool: Files that were modified: RefactoringTool: ../python3/nltk/align/api.py --- ../python3/nltk/align/api.py (original) +++ ../python3/nltk/align/api.py (refactored) @@ -7,7 +7,7 @@ # URL: # For license information, see LICENSE.TXT -from __future__ import print_function, unicode_literals + from nltk.compat import python_2_unicode_compatible, string_types from nltk.metrics import precision, recall + for i in '$(find ../python3 -type f -name '\''*.py'\'' |grep -v '\''Tables\.py'\'')' + 2to3 -w -n ../python3/nltk/align/__init__.py RefactoringTool: Skipping optional fixer: buffer RefactoringTool: Skipping optional fixer: idioms RefactoringTool: Skipping optional fixer: set_literal RefactoringTool: Skipping optional fixer: ws_comma RefactoringTool: No changes to ../python3/nltk/align/__init__.py RefactoringTool: Files that need to be modified: RefactoringTool: ../python3/nltk/align/__init__.py + for i in '$(find ../python3 -type f -name '\''*.py'\'' |grep -v '\''Tables\.py'\'')' + 2to3 -w -n ../python3/nltk/__init__.py RefactoringTool: Skipping optional fixer: buffer RefactoringTool: Skipping optional fixer: idioms RefactoringTool: Skipping optional fixer: set_literal RefactoringTool: Skipping optional fixer: ws_comma RefactoringTool: Refactored ../python3/nltk/__init__.py RefactoringTool: Files that were modified: RefactoringTool: 
../python3/nltk/__init__.py --- ../python3/nltk/__init__.py (original) +++ ../python3/nltk/__init__.py (refactored) @@ -15,7 +15,7 @@ Natural Language Processing with Python. O'Reilly Media Inc. http://nltk.org/book """ -from __future__ import print_function, absolute_import + import os + exit 0 Executing(%build): /bin/sh -e /usr/src/tmp/rpm-tmp.8096 + umask 022 + /bin/mkdir -p /usr/src/RPM/BUILD + cd /usr/src/RPM/BUILD + cd python-module-nltk-3.0.1 + CFLAGS='-pipe -Wall -g -O2' + export CFLAGS + CXXFLAGS='-pipe -Wall -g -O2' + export CXXFLAGS + FFLAGS='-pipe -Wall -g -O2' + export FFLAGS + /usr/bin/python2.7 setup.py build running build running build_py creating build creating build/lib creating build/lib/nltk copying nltk/wsd.py -> build/lib/nltk copying nltk/util.py -> build/lib/nltk copying nltk/treetransforms.py -> build/lib/nltk copying nltk/tree.py -> build/lib/nltk copying nltk/toolbox.py -> build/lib/nltk copying nltk/text.py -> build/lib/nltk copying nltk/probability.py -> build/lib/nltk copying nltk/lazyimport.py -> build/lib/nltk copying nltk/jsontags.py -> build/lib/nltk copying nltk/internals.py -> build/lib/nltk copying nltk/help.py -> build/lib/nltk copying nltk/grammar.py -> build/lib/nltk copying nltk/featstruct.py -> build/lib/nltk copying nltk/downloader.py -> build/lib/nltk copying nltk/decorators.py -> build/lib/nltk copying nltk/data.py -> build/lib/nltk copying nltk/compat.py -> build/lib/nltk copying nltk/collocations.py -> build/lib/nltk copying nltk/book.py -> build/lib/nltk copying nltk/__init__.py -> build/lib/nltk creating build/lib/nltk/tokenize copying nltk/tokenize/util.py -> build/lib/nltk/tokenize copying nltk/tokenize/treebank.py -> build/lib/nltk/tokenize copying nltk/tokenize/texttiling.py -> build/lib/nltk/tokenize copying nltk/tokenize/stanford.py -> build/lib/nltk/tokenize copying nltk/tokenize/simple.py -> build/lib/nltk/tokenize copying nltk/tokenize/sexpr.py -> build/lib/nltk/tokenize copying nltk/tokenize/regexp.py -> build/lib/nltk/tokenize copying nltk/tokenize/punkt.py -> build/lib/nltk/tokenize copying nltk/tokenize/api.py -> build/lib/nltk/tokenize copying nltk/tokenize/__init__.py -> build/lib/nltk/tokenize creating build/lib/nltk/test copying nltk/test/wordnet_fixt.py -> build/lib/nltk/test copying nltk/test/semantics_fixt.py -> build/lib/nltk/test copying nltk/test/segmentation_fixt.py -> build/lib/nltk/test copying nltk/test/runtests.py -> build/lib/nltk/test copying nltk/test/probability_fixt.py -> build/lib/nltk/test copying nltk/test/portuguese_en_fixt.py -> build/lib/nltk/test copying nltk/test/nonmonotonic_fixt.py -> build/lib/nltk/test copying nltk/test/inference_fixt.py -> build/lib/nltk/test copying nltk/test/gluesemantics_malt_fixt.py -> build/lib/nltk/test copying nltk/test/doctest_nose_plugin.py -> build/lib/nltk/test copying nltk/test/discourse_fixt.py -> build/lib/nltk/test copying nltk/test/corpus_fixt.py -> build/lib/nltk/test copying nltk/test/compat_fixt.py -> build/lib/nltk/test copying nltk/test/classify_fixt.py -> build/lib/nltk/test copying nltk/test/childes_fixt.py -> build/lib/nltk/test copying nltk/test/all.py -> build/lib/nltk/test copying nltk/test/align_fixt.py -> build/lib/nltk/test copying nltk/test/__init__.py -> build/lib/nltk/test creating build/lib/nltk/tbl copying nltk/tbl/template.py -> build/lib/nltk/tbl copying nltk/tbl/rule.py -> build/lib/nltk/tbl copying nltk/tbl/feature.py -> build/lib/nltk/tbl copying nltk/tbl/erroranalysis.py -> build/lib/nltk/tbl copying nltk/tbl/demo.py -> build/lib/nltk/tbl 
copying nltk/tbl/api.py -> build/lib/nltk/tbl copying nltk/tbl/__init__.py -> build/lib/nltk/tbl creating build/lib/nltk/tag copying nltk/tag/util.py -> build/lib/nltk/tag copying nltk/tag/tnt.py -> build/lib/nltk/tag copying nltk/tag/stanford.py -> build/lib/nltk/tag copying nltk/tag/sequential.py -> build/lib/nltk/tag copying nltk/tag/senna.py -> build/lib/nltk/tag copying nltk/tag/mapping.py -> build/lib/nltk/tag copying nltk/tag/hunpos.py -> build/lib/nltk/tag copying nltk/tag/hmm.py -> build/lib/nltk/tag copying nltk/tag/brill_trainer_orig.py -> build/lib/nltk/tag copying nltk/tag/brill_trainer.py -> build/lib/nltk/tag copying nltk/tag/brill.py -> build/lib/nltk/tag copying nltk/tag/api.py -> build/lib/nltk/tag copying nltk/tag/__init__.py -> build/lib/nltk/tag creating build/lib/nltk/stem copying nltk/stem/wordnet.py -> build/lib/nltk/stem copying nltk/stem/snowball.py -> build/lib/nltk/stem copying nltk/stem/rslp.py -> build/lib/nltk/stem copying nltk/stem/regexp.py -> build/lib/nltk/stem copying nltk/stem/porter.py -> build/lib/nltk/stem copying nltk/stem/lancaster.py -> build/lib/nltk/stem copying nltk/stem/isri.py -> build/lib/nltk/stem copying nltk/stem/api.py -> build/lib/nltk/stem copying nltk/stem/__init__.py -> build/lib/nltk/stem creating build/lib/nltk/sem copying nltk/sem/util.py -> build/lib/nltk/sem copying nltk/sem/skolemize.py -> build/lib/nltk/sem copying nltk/sem/relextract.py -> build/lib/nltk/sem copying nltk/sem/logic.py -> build/lib/nltk/sem copying nltk/sem/linearlogic.py -> build/lib/nltk/sem copying nltk/sem/lfg.py -> build/lib/nltk/sem copying nltk/sem/hole.py -> build/lib/nltk/sem copying nltk/sem/glue.py -> build/lib/nltk/sem copying nltk/sem/evaluate.py -> build/lib/nltk/sem copying nltk/sem/drt_glue_demo.py -> build/lib/nltk/sem copying nltk/sem/drt.py -> build/lib/nltk/sem copying nltk/sem/cooper_storage.py -> build/lib/nltk/sem copying nltk/sem/chat80.py -> build/lib/nltk/sem copying nltk/sem/boxer.py -> build/lib/nltk/sem copying nltk/sem/__init__.py -> build/lib/nltk/sem creating build/lib/nltk/parse copying nltk/parse/viterbi.py -> build/lib/nltk/parse copying nltk/parse/util.py -> build/lib/nltk/parse copying nltk/parse/stanford.py -> build/lib/nltk/parse copying nltk/parse/shiftreduce.py -> build/lib/nltk/parse copying nltk/parse/recursivedescent.py -> build/lib/nltk/parse copying nltk/parse/projectivedependencyparser.py -> build/lib/nltk/parse copying nltk/parse/pchart.py -> build/lib/nltk/parse copying nltk/parse/nonprojectivedependencyparser.py -> build/lib/nltk/parse copying nltk/parse/malt.py -> build/lib/nltk/parse copying nltk/parse/generate.py -> build/lib/nltk/parse copying nltk/parse/featurechart.py -> build/lib/nltk/parse copying nltk/parse/earleychart.py -> build/lib/nltk/parse copying nltk/parse/dependencygraph.py -> build/lib/nltk/parse copying nltk/parse/chart.py -> build/lib/nltk/parse copying nltk/parse/api.py -> build/lib/nltk/parse copying nltk/parse/__init__.py -> build/lib/nltk/parse creating build/lib/nltk/misc copying nltk/misc/wordfinder.py -> build/lib/nltk/misc copying nltk/misc/sort.py -> build/lib/nltk/misc copying nltk/misc/minimalset.py -> build/lib/nltk/misc copying nltk/misc/chomsky.py -> build/lib/nltk/misc copying nltk/misc/babelfish.py -> build/lib/nltk/misc copying nltk/misc/__init__.py -> build/lib/nltk/misc creating build/lib/nltk/metrics copying nltk/metrics/spearman.py -> build/lib/nltk/metrics copying nltk/metrics/segmentation.py -> build/lib/nltk/metrics copying nltk/metrics/scores.py -> 
build/lib/nltk/metrics copying nltk/metrics/paice.py -> build/lib/nltk/metrics copying nltk/metrics/distance.py -> build/lib/nltk/metrics copying nltk/metrics/confusionmatrix.py -> build/lib/nltk/metrics copying nltk/metrics/association.py -> build/lib/nltk/metrics copying nltk/metrics/agreement.py -> build/lib/nltk/metrics copying nltk/metrics/__init__.py -> build/lib/nltk/metrics creating build/lib/nltk/inference copying nltk/inference/tableau.py -> build/lib/nltk/inference copying nltk/inference/resolution.py -> build/lib/nltk/inference copying nltk/inference/prover9.py -> build/lib/nltk/inference copying nltk/inference/nonmonotonic.py -> build/lib/nltk/inference copying nltk/inference/mace.py -> build/lib/nltk/inference copying nltk/inference/discourse.py -> build/lib/nltk/inference copying nltk/inference/api.py -> build/lib/nltk/inference copying nltk/inference/__init__.py -> build/lib/nltk/inference creating build/lib/nltk/draw copying nltk/draw/util.py -> build/lib/nltk/draw copying nltk/draw/tree.py -> build/lib/nltk/draw copying nltk/draw/table.py -> build/lib/nltk/draw copying nltk/draw/dispersion.py -> build/lib/nltk/draw copying nltk/draw/cfg.py -> build/lib/nltk/draw copying nltk/draw/__init__.py -> build/lib/nltk/draw creating build/lib/nltk/corpus copying nltk/corpus/util.py -> build/lib/nltk/corpus copying nltk/corpus/europarl_raw.py -> build/lib/nltk/corpus copying nltk/corpus/__init__.py -> build/lib/nltk/corpus creating build/lib/nltk/cluster copying nltk/cluster/util.py -> build/lib/nltk/cluster copying nltk/cluster/kmeans.py -> build/lib/nltk/cluster copying nltk/cluster/gaac.py -> build/lib/nltk/cluster copying nltk/cluster/em.py -> build/lib/nltk/cluster copying nltk/cluster/api.py -> build/lib/nltk/cluster copying nltk/cluster/__init__.py -> build/lib/nltk/cluster creating build/lib/nltk/classify copying nltk/classify/weka.py -> build/lib/nltk/classify copying nltk/classify/util.py -> build/lib/nltk/classify copying nltk/classify/tadm.py -> build/lib/nltk/classify copying nltk/classify/svm.py -> build/lib/nltk/classify copying nltk/classify/scikitlearn.py -> build/lib/nltk/classify copying nltk/classify/rte_classify.py -> build/lib/nltk/classify copying nltk/classify/positivenaivebayes.py -> build/lib/nltk/classify copying nltk/classify/naivebayes.py -> build/lib/nltk/classify copying nltk/classify/megam.py -> build/lib/nltk/classify copying nltk/classify/maxent.py -> build/lib/nltk/classify copying nltk/classify/decisiontree.py -> build/lib/nltk/classify copying nltk/classify/api.py -> build/lib/nltk/classify copying nltk/classify/__init__.py -> build/lib/nltk/classify creating build/lib/nltk/chunk copying nltk/chunk/util.py -> build/lib/nltk/chunk copying nltk/chunk/regexp.py -> build/lib/nltk/chunk copying nltk/chunk/named_entity.py -> build/lib/nltk/chunk copying nltk/chunk/api.py -> build/lib/nltk/chunk copying nltk/chunk/__init__.py -> build/lib/nltk/chunk creating build/lib/nltk/chat copying nltk/chat/zen.py -> build/lib/nltk/chat copying nltk/chat/util.py -> build/lib/nltk/chat copying nltk/chat/suntsu.py -> build/lib/nltk/chat copying nltk/chat/rude.py -> build/lib/nltk/chat copying nltk/chat/iesha.py -> build/lib/nltk/chat copying nltk/chat/eliza.py -> build/lib/nltk/chat copying nltk/chat/__init__.py -> build/lib/nltk/chat creating build/lib/nltk/ccg copying nltk/ccg/lexicon.py -> build/lib/nltk/ccg copying nltk/ccg/combinator.py -> build/lib/nltk/ccg copying nltk/ccg/chart.py -> build/lib/nltk/ccg copying nltk/ccg/api.py -> build/lib/nltk/ccg copying 
nltk/ccg/__init__.py -> build/lib/nltk/ccg creating build/lib/nltk/app copying nltk/app/wordnet_app.py -> build/lib/nltk/app copying nltk/app/wordfreq_app.py -> build/lib/nltk/app copying nltk/app/srparser_app.py -> build/lib/nltk/app copying nltk/app/rdparser_app.py -> build/lib/nltk/app copying nltk/app/nemo_app.py -> build/lib/nltk/app copying nltk/app/concordance_app.py -> build/lib/nltk/app copying nltk/app/collocations_app.py -> build/lib/nltk/app copying nltk/app/chunkparser_app.py -> build/lib/nltk/app copying nltk/app/chartparser_app.py -> build/lib/nltk/app copying nltk/app/__init__.py -> build/lib/nltk/app creating build/lib/nltk/align copying nltk/align/util.py -> build/lib/nltk/align copying nltk/align/phrase_based.py -> build/lib/nltk/align copying nltk/align/ibm3.py -> build/lib/nltk/align copying nltk/align/ibm2.py -> build/lib/nltk/align copying nltk/align/ibm1.py -> build/lib/nltk/align copying nltk/align/gdfa.py -> build/lib/nltk/align copying nltk/align/gale_church.py -> build/lib/nltk/align copying nltk/align/bleu.py -> build/lib/nltk/align copying nltk/align/api.py -> build/lib/nltk/align copying nltk/align/__init__.py -> build/lib/nltk/align creating build/lib/nltk/test/unit copying nltk/test/unit/utils.py -> build/lib/nltk/test/unit copying nltk/test/unit/test_tag.py -> build/lib/nltk/test/unit copying nltk/test/unit/test_stem.py -> build/lib/nltk/test/unit copying nltk/test/unit/test_seekable_unicode_stream_reader.py -> build/lib/nltk/test/unit copying nltk/test/unit/test_naivebayes.py -> build/lib/nltk/test/unit copying nltk/test/unit/test_hmm.py -> build/lib/nltk/test/unit copying nltk/test/unit/test_corpus_views.py -> build/lib/nltk/test/unit copying nltk/test/unit/test_corpora.py -> build/lib/nltk/test/unit copying nltk/test/unit/test_collocations.py -> build/lib/nltk/test/unit copying nltk/test/unit/test_classify.py -> build/lib/nltk/test/unit copying nltk/test/unit/test_2x_compat.py -> build/lib/nltk/test/unit copying nltk/test/unit/__init__.py -> build/lib/nltk/test/unit creating build/lib/nltk/corpus/reader copying nltk/corpus/reader/ycoe.py -> build/lib/nltk/corpus/reader copying nltk/corpus/reader/xmldocs.py -> build/lib/nltk/corpus/reader copying nltk/corpus/reader/wordnet.py -> build/lib/nltk/corpus/reader copying nltk/corpus/reader/wordlist.py -> build/lib/nltk/corpus/reader copying nltk/corpus/reader/verbnet.py -> build/lib/nltk/corpus/reader copying nltk/corpus/reader/util.py -> build/lib/nltk/corpus/reader copying nltk/corpus/reader/udhr.py -> build/lib/nltk/corpus/reader copying nltk/corpus/reader/toolbox.py -> build/lib/nltk/corpus/reader copying nltk/corpus/reader/timit.py -> build/lib/nltk/corpus/reader copying nltk/corpus/reader/tagged.py -> build/lib/nltk/corpus/reader copying nltk/corpus/reader/switchboard.py -> build/lib/nltk/corpus/reader copying nltk/corpus/reader/string_category.py -> build/lib/nltk/corpus/reader copying nltk/corpus/reader/sinica_treebank.py -> build/lib/nltk/corpus/reader copying nltk/corpus/reader/sentiwordnet.py -> build/lib/nltk/corpus/reader copying nltk/corpus/reader/senseval.py -> build/lib/nltk/corpus/reader copying nltk/corpus/reader/semcor.py -> build/lib/nltk/corpus/reader copying nltk/corpus/reader/rte.py -> build/lib/nltk/corpus/reader copying nltk/corpus/reader/propbank.py -> build/lib/nltk/corpus/reader copying nltk/corpus/reader/ppattach.py -> build/lib/nltk/corpus/reader copying nltk/corpus/reader/plaintext.py -> build/lib/nltk/corpus/reader copying nltk/corpus/reader/pl196x.py -> 
build/lib/nltk/corpus/reader copying nltk/corpus/reader/nps_chat.py -> build/lib/nltk/corpus/reader copying nltk/corpus/reader/nombank.py -> build/lib/nltk/corpus/reader copying nltk/corpus/reader/lin.py -> build/lib/nltk/corpus/reader copying nltk/corpus/reader/knbc.py -> build/lib/nltk/corpus/reader copying nltk/corpus/reader/ipipan.py -> build/lib/nltk/corpus/reader copying nltk/corpus/reader/indian.py -> build/lib/nltk/corpus/reader copying nltk/corpus/reader/ieer.py -> build/lib/nltk/corpus/reader copying nltk/corpus/reader/framenet.py -> build/lib/nltk/corpus/reader copying nltk/corpus/reader/dependency.py -> build/lib/nltk/corpus/reader copying nltk/corpus/reader/conll.py -> build/lib/nltk/corpus/reader copying nltk/corpus/reader/cmudict.py -> build/lib/nltk/corpus/reader copying nltk/corpus/reader/chunked.py -> build/lib/nltk/corpus/reader copying nltk/corpus/reader/childes.py -> build/lib/nltk/corpus/reader copying nltk/corpus/reader/chasen.py -> build/lib/nltk/corpus/reader copying nltk/corpus/reader/bracket_parse.py -> build/lib/nltk/corpus/reader copying nltk/corpus/reader/bnc.py -> build/lib/nltk/corpus/reader copying nltk/corpus/reader/api.py -> build/lib/nltk/corpus/reader copying nltk/corpus/reader/aligned.py -> build/lib/nltk/corpus/reader copying nltk/corpus/reader/__init__.py -> build/lib/nltk/corpus/reader copying nltk/test/wsd.doctest -> build/lib/nltk/test copying nltk/test/wordnet_lch.doctest -> build/lib/nltk/test copying nltk/test/wordnet.doctest -> build/lib/nltk/test copying nltk/test/util.doctest -> build/lib/nltk/test copying nltk/test/treetransforms.doctest -> build/lib/nltk/test copying nltk/test/tree.doctest -> build/lib/nltk/test copying nltk/test/toolbox.doctest -> build/lib/nltk/test copying nltk/test/tokenize.doctest -> build/lib/nltk/test copying nltk/test/tag.doctest -> build/lib/nltk/test copying nltk/test/stem.doctest -> build/lib/nltk/test copying nltk/test/simple.doctest -> build/lib/nltk/test copying nltk/test/sentiwordnet.doctest -> build/lib/nltk/test copying nltk/test/semantics.doctest -> build/lib/nltk/test copying nltk/test/resolution.doctest -> build/lib/nltk/test copying nltk/test/relextract.doctest -> build/lib/nltk/test copying nltk/test/propbank.doctest -> build/lib/nltk/test copying nltk/test/probability.doctest -> build/lib/nltk/test copying nltk/test/portuguese_en.doctest -> build/lib/nltk/test copying nltk/test/parse.doctest -> build/lib/nltk/test copying nltk/test/paice.doctest -> build/lib/nltk/test copying nltk/test/nonmonotonic.doctest -> build/lib/nltk/test copying nltk/test/misc.doctest -> build/lib/nltk/test copying nltk/test/metrics.doctest -> build/lib/nltk/test copying nltk/test/logic.doctest -> build/lib/nltk/test copying nltk/test/japanese.doctest -> build/lib/nltk/test copying nltk/test/internals.doctest -> build/lib/nltk/test copying nltk/test/inference.doctest -> build/lib/nltk/test copying nltk/test/index.doctest -> build/lib/nltk/test copying nltk/test/grammartestsuites.doctest -> build/lib/nltk/test copying nltk/test/grammar.doctest -> build/lib/nltk/test copying nltk/test/gluesemantics_malt.doctest -> build/lib/nltk/test copying nltk/test/gluesemantics.doctest -> build/lib/nltk/test copying nltk/test/generate.doctest -> build/lib/nltk/test copying nltk/test/framenet.doctest -> build/lib/nltk/test copying nltk/test/featstruct.doctest -> build/lib/nltk/test copying nltk/test/featgram.doctest -> build/lib/nltk/test copying nltk/test/drt.doctest -> build/lib/nltk/test copying nltk/test/discourse.doctest -> 
build/lib/nltk/test copying nltk/test/dependency.doctest -> build/lib/nltk/test copying nltk/test/data.doctest -> build/lib/nltk/test copying nltk/test/corpus.doctest -> build/lib/nltk/test copying nltk/test/compat.doctest -> build/lib/nltk/test copying nltk/test/collocations.doctest -> build/lib/nltk/test copying nltk/test/classify.doctest -> build/lib/nltk/test copying nltk/test/chunk.doctest -> build/lib/nltk/test copying nltk/test/childes.doctest -> build/lib/nltk/test copying nltk/test/chat80.doctest -> build/lib/nltk/test copying nltk/test/ccg.doctest -> build/lib/nltk/test copying nltk/test/bnc.doctest -> build/lib/nltk/test copying nltk/test/align.doctest -> build/lib/nltk/test copying nltk/VERSION -> build/lib/nltk + export PYTHONPATH=/usr/src/RPM/BUILD/python-module-nltk-3.0.1 + PYTHONPATH=/usr/src/RPM/BUILD/python-module-nltk-3.0.1 + pushd nltk_contrib ~/RPM/BUILD/python-module-nltk-3.0.1/nltk_contrib ~/RPM/BUILD/python-module-nltk-3.0.1 + CFLAGS='-pipe -Wall -g -O2' + export CFLAGS + CXXFLAGS='-pipe -Wall -g -O2' + export CXXFLAGS + FFLAGS='-pipe -Wall -g -O2' + export FFLAGS + /usr/bin/python2.7 setup.py build running build running build_py creating build creating build/lib creating build/lib/nltk_contrib copying nltk_contrib/wals.py -> build/lib/nltk_contrib copying nltk_contrib/timex.py -> build/lib/nltk_contrib copying nltk_contrib/textgrid.py -> build/lib/nltk_contrib copying nltk_contrib/stringcomp.py -> build/lib/nltk_contrib copying nltk_contrib/seqclass.py -> build/lib/nltk_contrib copying nltk_contrib/referring.py -> build/lib/nltk_contrib copying nltk_contrib/featuredemo.py -> build/lib/nltk_contrib copying nltk_contrib/concord.py -> build/lib/nltk_contrib copying nltk_contrib/combined.py -> build/lib/nltk_contrib copying nltk_contrib/__init__.py -> build/lib/nltk_contrib creating build/lib/nltk_contrib/align copying nltk_contrib/align/util.py -> build/lib/nltk_contrib/align copying nltk_contrib/align/test.py -> build/lib/nltk_contrib/align copying nltk_contrib/align/gale_church.py -> build/lib/nltk_contrib/align copying nltk_contrib/align/distance_measures.py -> build/lib/nltk_contrib/align copying nltk_contrib/align/api.py -> build/lib/nltk_contrib/align copying nltk_contrib/align/alignment_util.py -> build/lib/nltk_contrib/align copying nltk_contrib/align/align_util.py -> build/lib/nltk_contrib/align copying nltk_contrib/align/align_regions.py -> build/lib/nltk_contrib/align copying nltk_contrib/align/align.py -> build/lib/nltk_contrib/align copying nltk_contrib/align/__init__.py -> build/lib/nltk_contrib/align creating build/lib/nltk_contrib/bioreader copying nltk_contrib/bioreader/bioreader.py -> build/lib/nltk_contrib/bioreader copying nltk_contrib/bioreader/__init__.py -> build/lib/nltk_contrib/bioreader creating build/lib/nltk_contrib/classifier copying nltk_contrib/classifier/zeror.py -> build/lib/nltk_contrib/classifier copying nltk_contrib/classifier/util.py -> build/lib/nltk_contrib/classifier copying nltk_contrib/classifier/oner.py -> build/lib/nltk_contrib/classifier copying nltk_contrib/classifier/numrange.py -> build/lib/nltk_contrib/classifier copying nltk_contrib/classifier/naivebayes.py -> build/lib/nltk_contrib/classifier copying nltk_contrib/classifier/knn.py -> build/lib/nltk_contrib/classifier copying nltk_contrib/classifier/item.py -> build/lib/nltk_contrib/classifier copying nltk_contrib/classifier/instances.py -> build/lib/nltk_contrib/classifier copying nltk_contrib/classifier/instance.py -> build/lib/nltk_contrib/classifier copying 
nltk_contrib/classifier/format.py -> build/lib/nltk_contrib/classifier copying nltk_contrib/classifier/featureselect.py -> build/lib/nltk_contrib/classifier copying nltk_contrib/classifier/distancemetric.py -> build/lib/nltk_contrib/classifier copying nltk_contrib/classifier/discretisedattribute.py -> build/lib/nltk_contrib/classifier copying nltk_contrib/classifier/discretise.py -> build/lib/nltk_contrib/classifier copying nltk_contrib/classifier/decisiontree.py -> build/lib/nltk_contrib/classifier copying nltk_contrib/classifier/decisionstump.py -> build/lib/nltk_contrib/classifier copying nltk_contrib/classifier/confusionmatrix.py -> build/lib/nltk_contrib/classifier copying nltk_contrib/classifier/commandline.py -> build/lib/nltk_contrib/classifier copying nltk_contrib/classifier/classify.py -> build/lib/nltk_contrib/classifier copying nltk_contrib/classifier/cfile.py -> build/lib/nltk_contrib/classifier copying nltk_contrib/classifier/basicimports.py -> build/lib/nltk_contrib/classifier copying nltk_contrib/classifier/autoclass.py -> build/lib/nltk_contrib/classifier copying nltk_contrib/classifier/attribute.py -> build/lib/nltk_contrib/classifier copying nltk_contrib/classifier/__init__.py -> build/lib/nltk_contrib/classifier creating build/lib/nltk_contrib/classifier/exceptions copying nltk_contrib/classifier/exceptions/systemerror.py -> build/lib/nltk_contrib/classifier/exceptions copying nltk_contrib/classifier/exceptions/invaliddataerror.py -> build/lib/nltk_contrib/classifier/exceptions copying nltk_contrib/classifier/exceptions/illegalstateerror.py -> build/lib/nltk_contrib/classifier/exceptions copying nltk_contrib/classifier/exceptions/filenotfounderror.py -> build/lib/nltk_contrib/classifier/exceptions copying nltk_contrib/classifier/exceptions/__init__.py -> build/lib/nltk_contrib/classifier/exceptions creating build/lib/nltk_contrib/classifier_tests copying nltk_contrib/classifier_tests/zerortests.py -> build/lib/nltk_contrib/classifier_tests copying nltk_contrib/classifier_tests/onertests.py -> build/lib/nltk_contrib/classifier_tests copying nltk_contrib/classifier_tests/numrangetests.py -> build/lib/nltk_contrib/classifier_tests copying nltk_contrib/classifier_tests/naivebayestests.py -> build/lib/nltk_contrib/classifier_tests copying nltk_contrib/classifier_tests/knntests.py -> build/lib/nltk_contrib/classifier_tests copying nltk_contrib/classifier_tests/itemtests.py -> build/lib/nltk_contrib/classifier_tests copying nltk_contrib/classifier_tests/instancetests.py -> build/lib/nltk_contrib/classifier_tests copying nltk_contrib/classifier_tests/instancestests.py -> build/lib/nltk_contrib/classifier_tests copying nltk_contrib/classifier_tests/inittests.py -> build/lib/nltk_contrib/classifier_tests copying nltk_contrib/classifier_tests/formattests.py -> build/lib/nltk_contrib/classifier_tests copying nltk_contrib/classifier_tests/featureselecttests.py -> build/lib/nltk_contrib/classifier_tests copying nltk_contrib/classifier_tests/distancemetrictests.py -> build/lib/nltk_contrib/classifier_tests copying nltk_contrib/classifier_tests/discretisetests.py -> build/lib/nltk_contrib/classifier_tests copying nltk_contrib/classifier_tests/discretisedattributetests.py -> build/lib/nltk_contrib/classifier_tests copying nltk_contrib/classifier_tests/decisiontreetests.py -> build/lib/nltk_contrib/classifier_tests copying nltk_contrib/classifier_tests/decisionstumptests.py -> build/lib/nltk_contrib/classifier_tests copying nltk_contrib/classifier_tests/confusionmatrixtests.py -> 
build/lib/nltk_contrib/classifier_tests copying nltk_contrib/classifier_tests/commandlinetests.py -> build/lib/nltk_contrib/classifier_tests copying nltk_contrib/classifier_tests/classifytests.py -> build/lib/nltk_contrib/classifier_tests copying nltk_contrib/classifier_tests/cfiletests.py -> build/lib/nltk_contrib/classifier_tests copying nltk_contrib/classifier_tests/autoclasstests.py -> build/lib/nltk_contrib/classifier_tests copying nltk_contrib/classifier_tests/attributetests.py -> build/lib/nltk_contrib/classifier_tests copying nltk_contrib/classifier_tests/attributestests.py -> build/lib/nltk_contrib/classifier_tests copying nltk_contrib/classifier_tests/alltests.py -> build/lib/nltk_contrib/classifier_tests copying nltk_contrib/classifier_tests/__init__.py -> build/lib/nltk_contrib/classifier_tests creating build/lib/nltk_contrib/coref copying nltk_contrib/coref/util.py -> build/lib/nltk_contrib/coref copying nltk_contrib/coref/train.py -> build/lib/nltk_contrib/coref copying nltk_contrib/coref/tag.py -> build/lib/nltk_contrib/coref copying nltk_contrib/coref/resolve.py -> build/lib/nltk_contrib/coref copying nltk_contrib/coref/ne.py -> build/lib/nltk_contrib/coref copying nltk_contrib/coref/muc7.py -> build/lib/nltk_contrib/coref copying nltk_contrib/coref/muc.py -> build/lib/nltk_contrib/coref copying nltk_contrib/coref/freiburg.py -> build/lib/nltk_contrib/coref copying nltk_contrib/coref/features.py -> build/lib/nltk_contrib/coref copying nltk_contrib/coref/data.py -> build/lib/nltk_contrib/coref copying nltk_contrib/coref/chunk.py -> build/lib/nltk_contrib/coref copying nltk_contrib/coref/api.py -> build/lib/nltk_contrib/coref copying nltk_contrib/coref/ace2.py -> build/lib/nltk_contrib/coref copying nltk_contrib/coref/__init__.py -> build/lib/nltk_contrib/coref creating build/lib/nltk_contrib/dependency copying nltk_contrib/dependency/util.py -> build/lib/nltk_contrib/dependency copying nltk_contrib/dependency/ptbconv.py -> build/lib/nltk_contrib/dependency copying nltk_contrib/dependency/deptree.py -> build/lib/nltk_contrib/dependency copying nltk_contrib/dependency/__init__.py -> build/lib/nltk_contrib/dependency creating build/lib/nltk_contrib/fst copying nltk_contrib/fst/fst2.py -> build/lib/nltk_contrib/fst copying nltk_contrib/fst/fst.py -> build/lib/nltk_contrib/fst copying nltk_contrib/fst/draw_graph.py -> build/lib/nltk_contrib/fst copying nltk_contrib/fst/__init__.py -> build/lib/nltk_contrib/fst creating build/lib/nltk_contrib/fuf copying nltk_contrib/fuf/util.py -> build/lib/nltk_contrib/fuf copying nltk_contrib/fuf/statemachine.py -> build/lib/nltk_contrib/fuf copying nltk_contrib/fuf/specialfs.py -> build/lib/nltk_contrib/fuf copying nltk_contrib/fuf/sexp.py -> build/lib/nltk_contrib/fuf copying nltk_contrib/fuf/morphology.py -> build/lib/nltk_contrib/fuf copying nltk_contrib/fuf/link.py -> build/lib/nltk_contrib/fuf copying nltk_contrib/fuf/linearizer.py -> build/lib/nltk_contrib/fuf copying nltk_contrib/fuf/lexicon.py -> build/lib/nltk_contrib/fuf copying nltk_contrib/fuf/fufconvert.py -> build/lib/nltk_contrib/fuf copying nltk_contrib/fuf/fuf.py -> build/lib/nltk_contrib/fuf copying nltk_contrib/fuf/fstypes.py -> build/lib/nltk_contrib/fuf copying nltk_contrib/fuf/__init__.py -> build/lib/nltk_contrib/fuf creating build/lib/nltk_contrib/hadoop copying nltk_contrib/hadoop/__init__.py -> build/lib/nltk_contrib/hadoop creating build/lib/nltk_contrib/hadoop/hadooplib copying nltk_contrib/hadoop/hadooplib/util.py -> build/lib/nltk_contrib/hadoop/hadooplib copying 
nltk_contrib/hadoop/hadooplib/reducer.py -> build/lib/nltk_contrib/hadoop/hadooplib copying nltk_contrib/hadoop/hadooplib/outputcollector.py -> build/lib/nltk_contrib/hadoop/hadooplib copying nltk_contrib/hadoop/hadooplib/mapper.py -> build/lib/nltk_contrib/hadoop/hadooplib copying nltk_contrib/hadoop/hadooplib/inputformat.py -> build/lib/nltk_contrib/hadoop/hadooplib copying nltk_contrib/hadoop/hadooplib/__init__.py -> build/lib/nltk_contrib/hadoop/hadooplib creating build/lib/nltk_contrib/lambek copying nltk_contrib/lambek/typedterm.py -> build/lib/nltk_contrib/lambek copying nltk_contrib/lambek/term.py -> build/lib/nltk_contrib/lambek copying nltk_contrib/lambek/lexicon.py -> build/lib/nltk_contrib/lambek copying nltk_contrib/lambek/lambek.py -> build/lib/nltk_contrib/lambek copying nltk_contrib/lambek/__init__.py -> build/lib/nltk_contrib/lambek creating build/lib/nltk_contrib/lpath copying nltk_contrib/lpath/treecanvasview.py -> build/lib/nltk_contrib/lpath copying nltk_contrib/lpath/treecanvasnode.py -> build/lib/nltk_contrib/lpath copying nltk_contrib/lpath/treecanvas.py -> build/lib/nltk_contrib/lpath copying nltk_contrib/lpath/translator.py -> build/lib/nltk_contrib/lpath copying nltk_contrib/lpath/sqlviewdialog.py -> build/lib/nltk_contrib/lpath copying nltk_contrib/lpath/qba.py -> build/lib/nltk_contrib/lpath copying nltk_contrib/lpath/parselpath.py -> build/lib/nltk_contrib/lpath copying nltk_contrib/lpath/overlay.py -> build/lib/nltk_contrib/lpath copying nltk_contrib/lpath/nodefeaturedialog.py -> build/lib/nltk_contrib/lpath copying nltk_contrib/lpath/lpathtree_qt.py -> build/lib/nltk_contrib/lpath copying nltk_contrib/lpath/lpathtree.py -> build/lib/nltk_contrib/lpath copying nltk_contrib/lpath/dbdialog.py -> build/lib/nltk_contrib/lpath copying nltk_contrib/lpath/db.py -> build/lib/nltk_contrib/lpath copying nltk_contrib/lpath/axis.py -> build/lib/nltk_contrib/lpath copying nltk_contrib/lpath/__init__.py -> build/lib/nltk_contrib/lpath creating build/lib/nltk_contrib/lpath/lpath copying nltk_contrib/lpath/lpath/tb2tbl.py -> build/lib/nltk_contrib/lpath/lpath copying nltk_contrib/lpath/lpath/lpath.py -> build/lib/nltk_contrib/lpath/lpath copying nltk_contrib/lpath/lpath/__init__.py -> build/lib/nltk_contrib/lpath/lpath creating build/lib/nltk_contrib/lpath/at_lite copying nltk_contrib/lpath/at_lite/treeio.py -> build/lib/nltk_contrib/lpath/at_lite copying nltk_contrib/lpath/at_lite/treeedit_qlistview.py -> build/lib/nltk_contrib/lpath/at_lite copying nltk_contrib/lpath/at_lite/tree_qt.py -> build/lib/nltk_contrib/lpath/at_lite copying nltk_contrib/lpath/at_lite/tree.py -> build/lib/nltk_contrib/lpath/at_lite copying nltk_contrib/lpath/at_lite/tableproxy.py -> build/lib/nltk_contrib/lpath/at_lite copying nltk_contrib/lpath/at_lite/tableio.py -> build/lib/nltk_contrib/lpath/at_lite copying nltk_contrib/lpath/at_lite/tableedit_qtable.py -> build/lib/nltk_contrib/lpath/at_lite copying nltk_contrib/lpath/at_lite/table_qt.py -> build/lib/nltk_contrib/lpath/at_lite copying nltk_contrib/lpath/at_lite/table.py -> build/lib/nltk_contrib/lpath/at_lite copying nltk_contrib/lpath/at_lite/myaccel.py -> build/lib/nltk_contrib/lpath/at_lite copying nltk_contrib/lpath/at_lite/error.py -> build/lib/nltk_contrib/lpath/at_lite copying nltk_contrib/lpath/at_lite/__init__.py -> build/lib/nltk_contrib/lpath/at_lite creating build/lib/nltk_contrib/misc copying nltk_contrib/misc/paradigmquery.py -> build/lib/nltk_contrib/misc copying nltk_contrib/misc/paradigm.py -> build/lib/nltk_contrib/misc 
copying nltk_contrib/misc/marshalbrill.py -> build/lib/nltk_contrib/misc copying nltk_contrib/misc/marshal.py -> build/lib/nltk_contrib/misc copying nltk_contrib/misc/lex.py -> build/lib/nltk_contrib/misc copying nltk_contrib/misc/langid.py -> build/lib/nltk_contrib/misc copying nltk_contrib/misc/kimmo.py -> build/lib/nltk_contrib/misc copying nltk_contrib/misc/huffman.py -> build/lib/nltk_contrib/misc copying nltk_contrib/misc/fsa.py -> build/lib/nltk_contrib/misc copying nltk_contrib/misc/didyoumean.py -> build/lib/nltk_contrib/misc copying nltk_contrib/misc/annotationgraph.py -> build/lib/nltk_contrib/misc copying nltk_contrib/misc/__init__.py -> build/lib/nltk_contrib/misc creating build/lib/nltk_contrib/mit copying nltk_contrib/mit/__init__.py -> build/lib/nltk_contrib/mit creating build/lib/nltk_contrib/mit/six863 copying nltk_contrib/mit/six863/__init__.py -> build/lib/nltk_contrib/mit/six863 creating build/lib/nltk_contrib/mit/six863/kimmo copying nltk_contrib/mit/six863/kimmo/rules.py -> build/lib/nltk_contrib/mit/six863/kimmo copying nltk_contrib/mit/six863/kimmo/pairs.py -> build/lib/nltk_contrib/mit/six863/kimmo copying nltk_contrib/mit/six863/kimmo/morphology.py -> build/lib/nltk_contrib/mit/six863/kimmo copying nltk_contrib/mit/six863/kimmo/kimmotest.py -> build/lib/nltk_contrib/mit/six863/kimmo copying nltk_contrib/mit/six863/kimmo/kimmo.py -> build/lib/nltk_contrib/mit/six863/kimmo copying nltk_contrib/mit/six863/kimmo/fsa.py -> build/lib/nltk_contrib/mit/six863/kimmo copying nltk_contrib/mit/six863/kimmo/featurelite.py -> build/lib/nltk_contrib/mit/six863/kimmo copying nltk_contrib/mit/six863/kimmo/draw.py -> build/lib/nltk_contrib/mit/six863/kimmo copying nltk_contrib/mit/six863/kimmo/__init__.py -> build/lib/nltk_contrib/mit/six863/kimmo creating build/lib/nltk_contrib/mit/six863/parse copying nltk_contrib/mit/six863/parse/treeview.py -> build/lib/nltk_contrib/mit/six863/parse copying nltk_contrib/mit/six863/parse/tree.py -> build/lib/nltk_contrib/mit/six863/parse copying nltk_contrib/mit/six863/parse/test.py -> build/lib/nltk_contrib/mit/six863/parse copying nltk_contrib/mit/six863/parse/featurelite.py -> build/lib/nltk_contrib/mit/six863/parse copying nltk_contrib/mit/six863/parse/featurechart.py -> build/lib/nltk_contrib/mit/six863/parse copying nltk_contrib/mit/six863/parse/chart.py -> build/lib/nltk_contrib/mit/six863/parse copying nltk_contrib/mit/six863/parse/cfg.py -> build/lib/nltk_contrib/mit/six863/parse copying nltk_contrib/mit/six863/parse/category.py -> build/lib/nltk_contrib/mit/six863/parse copying nltk_contrib/mit/six863/parse/__init__.py -> build/lib/nltk_contrib/mit/six863/parse creating build/lib/nltk_contrib/mit/six863/semantics copying nltk_contrib/mit/six863/semantics/treeview.py -> build/lib/nltk_contrib/mit/six863/semantics copying nltk_contrib/mit/six863/semantics/tree.py -> build/lib/nltk_contrib/mit/six863/semantics copying nltk_contrib/mit/six863/semantics/testw.py -> build/lib/nltk_contrib/mit/six863/semantics copying nltk_contrib/mit/six863/semantics/test.py -> build/lib/nltk_contrib/mit/six863/semantics copying nltk_contrib/mit/six863/semantics/logic.py -> build/lib/nltk_contrib/mit/six863/semantics copying nltk_contrib/mit/six863/semantics/interact.py -> build/lib/nltk_contrib/mit/six863/semantics copying nltk_contrib/mit/six863/semantics/featurelite.py -> build/lib/nltk_contrib/mit/six863/semantics copying nltk_contrib/mit/six863/semantics/featurechart.py -> build/lib/nltk_contrib/mit/six863/semantics copying 
nltk_contrib/mit/six863/semantics/chart.py -> build/lib/nltk_contrib/mit/six863/semantics copying nltk_contrib/mit/six863/semantics/cfg.py -> build/lib/nltk_contrib/mit/six863/semantics copying nltk_contrib/mit/six863/semantics/category.py -> build/lib/nltk_contrib/mit/six863/semantics copying nltk_contrib/mit/six863/semantics/batchtest.py -> build/lib/nltk_contrib/mit/six863/semantics copying nltk_contrib/mit/six863/semantics/__init__.py -> build/lib/nltk_contrib/mit/six863/semantics creating build/lib/nltk_contrib/mit/six863/tagging copying nltk_contrib/mit/six863/tagging/train.py -> build/lib/nltk_contrib/mit/six863/tagging copying nltk_contrib/mit/six863/tagging/tagparse.py -> build/lib/nltk_contrib/mit/six863/tagging copying nltk_contrib/mit/six863/tagging/drawchart.py -> build/lib/nltk_contrib/mit/six863/tagging copying nltk_contrib/mit/six863/tagging/__init__.py -> build/lib/nltk_contrib/mit/six863/tagging creating build/lib/nltk_contrib/readability copying nltk_contrib/readability/urlextracter.py -> build/lib/nltk_contrib/readability copying nltk_contrib/readability/textanalyzer.py -> build/lib/nltk_contrib/readability copying nltk_contrib/readability/syllables_no.py -> build/lib/nltk_contrib/readability copying nltk_contrib/readability/syllables_en.py -> build/lib/nltk_contrib/readability copying nltk_contrib/readability/readabilitytests.py -> build/lib/nltk_contrib/readability copying nltk_contrib/readability/languageclassifier.py -> build/lib/nltk_contrib/readability copying nltk_contrib/readability/crawler.py -> build/lib/nltk_contrib/readability copying nltk_contrib/readability/__init__.py -> build/lib/nltk_contrib/readability creating build/lib/nltk_contrib/refexpr copying nltk_contrib/refexpr/util.py -> build/lib/nltk_contrib/refexpr copying nltk_contrib/refexpr/relational.py -> build/lib/nltk_contrib/refexpr copying nltk_contrib/refexpr/incremental.py -> build/lib/nltk_contrib/refexpr copying nltk_contrib/refexpr/gre3d_facts.py -> build/lib/nltk_contrib/refexpr copying nltk_contrib/refexpr/full_brevity.py -> build/lib/nltk_contrib/refexpr copying nltk_contrib/refexpr/drawers.py -> build/lib/nltk_contrib/refexpr copying nltk_contrib/refexpr/constraint.py -> build/lib/nltk_contrib/refexpr copying nltk_contrib/refexpr/__init__.py -> build/lib/nltk_contrib/refexpr creating build/lib/nltk_contrib/rte copying nltk_contrib/rte/logicentail.py -> build/lib/nltk_contrib/rte copying nltk_contrib/rte/__init__.py -> build/lib/nltk_contrib/rte package init file 'nltk_contrib/scripttranscriber/__init__.py' not found (or not a regular file) creating build/lib/nltk_contrib/scripttranscriber copying nltk_contrib/scripttranscriber/xmlhandler_unittest.py -> build/lib/nltk_contrib/scripttranscriber copying nltk_contrib/scripttranscriber/xmlhandler.py -> build/lib/nltk_contrib/scripttranscriber copying nltk_contrib/scripttranscriber/unittest.py -> build/lib/nltk_contrib/scripttranscriber copying nltk_contrib/scripttranscriber/tokens.py -> build/lib/nltk_contrib/scripttranscriber copying nltk_contrib/scripttranscriber/token_comp_unittest.py -> build/lib/nltk_contrib/scripttranscriber copying nltk_contrib/scripttranscriber/token_comp.py -> build/lib/nltk_contrib/scripttranscriber copying nltk_contrib/scripttranscriber/thai_unittest.py -> build/lib/nltk_contrib/scripttranscriber copying nltk_contrib/scripttranscriber/thai_extractor.py -> build/lib/nltk_contrib/scripttranscriber copying nltk_contrib/scripttranscriber/sample.py -> build/lib/nltk_contrib/scripttranscriber copying 
nltk_contrib/scripttranscriber/pronouncer_unittest.py -> build/lib/nltk_contrib/scripttranscriber copying nltk_contrib/scripttranscriber/pronouncer.py -> build/lib/nltk_contrib/scripttranscriber copying nltk_contrib/scripttranscriber/perceptron_trainer_unittest.py -> build/lib/nltk_contrib/scripttranscriber copying nltk_contrib/scripttranscriber/perceptron_trainer.py -> build/lib/nltk_contrib/scripttranscriber copying nltk_contrib/scripttranscriber/perceptron.py -> build/lib/nltk_contrib/scripttranscriber copying nltk_contrib/scripttranscriber/paper_example.py -> build/lib/nltk_contrib/scripttranscriber copying nltk_contrib/scripttranscriber/morph_unittest.py -> build/lib/nltk_contrib/scripttranscriber copying nltk_contrib/scripttranscriber/morph.py -> build/lib/nltk_contrib/scripttranscriber copying nltk_contrib/scripttranscriber/miner.py -> build/lib/nltk_contrib/scripttranscriber copying nltk_contrib/scripttranscriber/makeindex.py -> build/lib/nltk_contrib/scripttranscriber copying nltk_contrib/scripttranscriber/japanese_extractor.py -> build/lib/nltk_contrib/scripttranscriber copying nltk_contrib/scripttranscriber/filter_unittest.py -> build/lib/nltk_contrib/scripttranscriber copying nltk_contrib/scripttranscriber/filter.py -> build/lib/nltk_contrib/scripttranscriber copying nltk_contrib/scripttranscriber/extractor_unittest.py -> build/lib/nltk_contrib/scripttranscriber copying nltk_contrib/scripttranscriber/extractor.py -> build/lib/nltk_contrib/scripttranscriber copying nltk_contrib/scripttranscriber/documents.py -> build/lib/nltk_contrib/scripttranscriber copying nltk_contrib/scripttranscriber/def_pronouncers.py -> build/lib/nltk_contrib/scripttranscriber copying nltk_contrib/scripttranscriber/chinese_extractor.py -> build/lib/nltk_contrib/scripttranscriber copying nltk_contrib/scripttranscriber/auxiliary_comp.py -> build/lib/nltk_contrib/scripttranscriber copying nltk_contrib/scripttranscriber/alignpairsFST.py -> build/lib/nltk_contrib/scripttranscriber creating build/lib/nltk_contrib/tag copying nltk_contrib/tag/tnt.py -> build/lib/nltk_contrib/tag copying nltk_contrib/tag/__init__.py -> build/lib/nltk_contrib/tag creating build/lib/nltk_contrib/tiger copying nltk_contrib/tiger/tigerxml.py -> build/lib/nltk_contrib/tiger copying nltk_contrib/tiger/index.py -> build/lib/nltk_contrib/tiger copying nltk_contrib/tiger/graph.py -> build/lib/nltk_contrib/tiger copying nltk_contrib/tiger/demo.py -> build/lib/nltk_contrib/tiger copying nltk_contrib/tiger/corpus.py -> build/lib/nltk_contrib/tiger copying nltk_contrib/tiger/__init__.py -> build/lib/nltk_contrib/tiger creating build/lib/nltk_contrib/tiger/indexer copying nltk_contrib/tiger/indexer/tiger_corpus_indexer.py -> build/lib/nltk_contrib/tiger/indexer copying nltk_contrib/tiger/indexer/graph_serializer.py -> build/lib/nltk_contrib/tiger/indexer copying nltk_contrib/tiger/indexer/__init__.py -> build/lib/nltk_contrib/tiger/indexer creating build/lib/nltk_contrib/tiger/query copying nltk_contrib/tiger/query/tsqlparser.py -> build/lib/nltk_contrib/tiger/query copying nltk_contrib/tiger/query/result.py -> build/lib/nltk_contrib/tiger/query copying nltk_contrib/tiger/query/querybuilder.py -> build/lib/nltk_contrib/tiger/query copying nltk_contrib/tiger/query/predicates.py -> build/lib/nltk_contrib/tiger/query copying nltk_contrib/tiger/query/nodesearcher.py -> build/lib/nltk_contrib/tiger/query copying nltk_contrib/tiger/query/node_variable.py -> build/lib/nltk_contrib/tiger/query copying nltk_contrib/tiger/query/factory.py -> 
build/lib/nltk_contrib/tiger/query copying nltk_contrib/tiger/query/exceptions.py -> build/lib/nltk_contrib/tiger/query copying nltk_contrib/tiger/query/evaluator.py -> build/lib/nltk_contrib/tiger/query copying nltk_contrib/tiger/query/constraints.py -> build/lib/nltk_contrib/tiger/query copying nltk_contrib/tiger/query/ast_visitor.py -> build/lib/nltk_contrib/tiger/query copying nltk_contrib/tiger/query/ast_utils.py -> build/lib/nltk_contrib/tiger/query copying nltk_contrib/tiger/query/ast.py -> build/lib/nltk_contrib/tiger/query copying nltk_contrib/tiger/query/__init__.py -> build/lib/nltk_contrib/tiger/query creating build/lib/nltk_contrib/tiger/utils copying nltk_contrib/tiger/utils/parallel.py -> build/lib/nltk_contrib/tiger/utils copying nltk_contrib/tiger/utils/factory.py -> build/lib/nltk_contrib/tiger/utils copying nltk_contrib/tiger/utils/etree_xml.py -> build/lib/nltk_contrib/tiger/utils copying nltk_contrib/tiger/utils/enum.py -> build/lib/nltk_contrib/tiger/utils copying nltk_contrib/tiger/utils/db.py -> build/lib/nltk_contrib/tiger/utils copying nltk_contrib/tiger/utils/__init__.py -> build/lib/nltk_contrib/tiger/utils creating build/lib/nltk_contrib/toolbox copying nltk_contrib/toolbox/utilities.py -> build/lib/nltk_contrib/toolbox copying nltk_contrib/toolbox/text.py -> build/lib/nltk_contrib/toolbox copying nltk_contrib/toolbox/settings.py -> build/lib/nltk_contrib/toolbox copying nltk_contrib/toolbox/normalise.py -> build/lib/nltk_contrib/toolbox copying nltk_contrib/toolbox/lexicon.py -> build/lib/nltk_contrib/toolbox copying nltk_contrib/toolbox/language.py -> build/lib/nltk_contrib/toolbox copying nltk_contrib/toolbox/iu_mien_hier.py -> build/lib/nltk_contrib/toolbox copying nltk_contrib/toolbox/etreelib.py -> build/lib/nltk_contrib/toolbox copying nltk_contrib/toolbox/errors.py -> build/lib/nltk_contrib/toolbox copying nltk_contrib/toolbox/data.py -> build/lib/nltk_contrib/toolbox copying nltk_contrib/toolbox/__init__.py -> build/lib/nltk_contrib/toolbox
package init file 'nltk_contrib/scripttranscriber/__init__.py' not found (or not a regular file)
+ popd
~/RPM/BUILD/python-module-nltk-3.0.1
+ pushd ../python3
~/RPM/BUILD/python3 ~/RPM/BUILD/python-module-nltk-3.0.1
+ CFLAGS='-pipe -Wall -g -O2'
+ export CFLAGS
+ CXXFLAGS='-pipe -Wall -g -O2'
+ export CXXFLAGS
+ FFLAGS='-pipe -Wall -g -O2'
+ export FFLAGS
+ python3 setup.py build
running build
running build_py
creating build creating build/lib creating build/lib/nltk copying nltk/wsd.py -> build/lib/nltk copying nltk/util.py -> build/lib/nltk copying nltk/treetransforms.py -> build/lib/nltk copying nltk/tree.py -> build/lib/nltk copying nltk/toolbox.py -> build/lib/nltk copying nltk/text.py -> build/lib/nltk copying nltk/probability.py -> build/lib/nltk copying nltk/lazyimport.py -> build/lib/nltk copying nltk/jsontags.py -> build/lib/nltk copying nltk/internals.py -> build/lib/nltk copying nltk/help.py -> build/lib/nltk copying nltk/grammar.py -> build/lib/nltk copying nltk/featstruct.py -> build/lib/nltk copying nltk/downloader.py -> build/lib/nltk copying nltk/decorators.py -> build/lib/nltk copying nltk/data.py -> build/lib/nltk copying nltk/compat.py -> build/lib/nltk copying nltk/collocations.py -> build/lib/nltk copying nltk/book.py -> build/lib/nltk copying nltk/__init__.py -> build/lib/nltk creating build/lib/nltk/tokenize copying nltk/tokenize/util.py -> build/lib/nltk/tokenize copying nltk/tokenize/treebank.py -> build/lib/nltk/tokenize copying nltk/tokenize/texttiling.py -> build/lib/nltk/tokenize
copying nltk/tokenize/stanford.py -> build/lib/nltk/tokenize copying nltk/tokenize/simple.py -> build/lib/nltk/tokenize copying nltk/tokenize/sexpr.py -> build/lib/nltk/tokenize copying nltk/tokenize/regexp.py -> build/lib/nltk/tokenize copying nltk/tokenize/punkt.py -> build/lib/nltk/tokenize copying nltk/tokenize/api.py -> build/lib/nltk/tokenize copying nltk/tokenize/__init__.py -> build/lib/nltk/tokenize creating build/lib/nltk/test copying nltk/test/wordnet_fixt.py -> build/lib/nltk/test copying nltk/test/semantics_fixt.py -> build/lib/nltk/test copying nltk/test/segmentation_fixt.py -> build/lib/nltk/test copying nltk/test/runtests.py -> build/lib/nltk/test copying nltk/test/probability_fixt.py -> build/lib/nltk/test copying nltk/test/portuguese_en_fixt.py -> build/lib/nltk/test copying nltk/test/nonmonotonic_fixt.py -> build/lib/nltk/test copying nltk/test/inference_fixt.py -> build/lib/nltk/test copying nltk/test/gluesemantics_malt_fixt.py -> build/lib/nltk/test copying nltk/test/doctest_nose_plugin.py -> build/lib/nltk/test copying nltk/test/discourse_fixt.py -> build/lib/nltk/test copying nltk/test/corpus_fixt.py -> build/lib/nltk/test copying nltk/test/compat_fixt.py -> build/lib/nltk/test copying nltk/test/classify_fixt.py -> build/lib/nltk/test copying nltk/test/childes_fixt.py -> build/lib/nltk/test copying nltk/test/all.py -> build/lib/nltk/test copying nltk/test/align_fixt.py -> build/lib/nltk/test copying nltk/test/__init__.py -> build/lib/nltk/test creating build/lib/nltk/tbl copying nltk/tbl/template.py -> build/lib/nltk/tbl copying nltk/tbl/rule.py -> build/lib/nltk/tbl copying nltk/tbl/feature.py -> build/lib/nltk/tbl copying nltk/tbl/erroranalysis.py -> build/lib/nltk/tbl copying nltk/tbl/demo.py -> build/lib/nltk/tbl copying nltk/tbl/api.py -> build/lib/nltk/tbl copying nltk/tbl/__init__.py -> build/lib/nltk/tbl creating build/lib/nltk/tag copying nltk/tag/util.py -> build/lib/nltk/tag copying nltk/tag/tnt.py -> build/lib/nltk/tag copying nltk/tag/stanford.py -> build/lib/nltk/tag copying nltk/tag/sequential.py -> build/lib/nltk/tag copying nltk/tag/senna.py -> build/lib/nltk/tag copying nltk/tag/mapping.py -> build/lib/nltk/tag copying nltk/tag/hunpos.py -> build/lib/nltk/tag copying nltk/tag/hmm.py -> build/lib/nltk/tag copying nltk/tag/brill_trainer_orig.py -> build/lib/nltk/tag copying nltk/tag/brill_trainer.py -> build/lib/nltk/tag copying nltk/tag/brill.py -> build/lib/nltk/tag copying nltk/tag/api.py -> build/lib/nltk/tag copying nltk/tag/__init__.py -> build/lib/nltk/tag creating build/lib/nltk/stem copying nltk/stem/wordnet.py -> build/lib/nltk/stem copying nltk/stem/snowball.py -> build/lib/nltk/stem copying nltk/stem/rslp.py -> build/lib/nltk/stem copying nltk/stem/regexp.py -> build/lib/nltk/stem copying nltk/stem/porter.py -> build/lib/nltk/stem copying nltk/stem/lancaster.py -> build/lib/nltk/stem copying nltk/stem/isri.py -> build/lib/nltk/stem copying nltk/stem/api.py -> build/lib/nltk/stem copying nltk/stem/__init__.py -> build/lib/nltk/stem creating build/lib/nltk/sem copying nltk/sem/util.py -> build/lib/nltk/sem copying nltk/sem/skolemize.py -> build/lib/nltk/sem copying nltk/sem/relextract.py -> build/lib/nltk/sem copying nltk/sem/logic.py -> build/lib/nltk/sem copying nltk/sem/linearlogic.py -> build/lib/nltk/sem copying nltk/sem/lfg.py -> build/lib/nltk/sem copying nltk/sem/hole.py -> build/lib/nltk/sem copying nltk/sem/glue.py -> build/lib/nltk/sem copying nltk/sem/evaluate.py -> build/lib/nltk/sem copying nltk/sem/drt_glue_demo.py -> 
build/lib/nltk/sem copying nltk/sem/drt.py -> build/lib/nltk/sem copying nltk/sem/cooper_storage.py -> build/lib/nltk/sem copying nltk/sem/chat80.py -> build/lib/nltk/sem copying nltk/sem/boxer.py -> build/lib/nltk/sem copying nltk/sem/__init__.py -> build/lib/nltk/sem creating build/lib/nltk/parse copying nltk/parse/viterbi.py -> build/lib/nltk/parse copying nltk/parse/util.py -> build/lib/nltk/parse copying nltk/parse/stanford.py -> build/lib/nltk/parse copying nltk/parse/shiftreduce.py -> build/lib/nltk/parse copying nltk/parse/recursivedescent.py -> build/lib/nltk/parse copying nltk/parse/projectivedependencyparser.py -> build/lib/nltk/parse copying nltk/parse/pchart.py -> build/lib/nltk/parse copying nltk/parse/nonprojectivedependencyparser.py -> build/lib/nltk/parse copying nltk/parse/malt.py -> build/lib/nltk/parse copying nltk/parse/generate.py -> build/lib/nltk/parse copying nltk/parse/featurechart.py -> build/lib/nltk/parse copying nltk/parse/earleychart.py -> build/lib/nltk/parse copying nltk/parse/dependencygraph.py -> build/lib/nltk/parse copying nltk/parse/chart.py -> build/lib/nltk/parse copying nltk/parse/api.py -> build/lib/nltk/parse copying nltk/parse/__init__.py -> build/lib/nltk/parse creating build/lib/nltk/misc copying nltk/misc/wordfinder.py -> build/lib/nltk/misc copying nltk/misc/sort.py -> build/lib/nltk/misc copying nltk/misc/minimalset.py -> build/lib/nltk/misc copying nltk/misc/chomsky.py -> build/lib/nltk/misc copying nltk/misc/babelfish.py -> build/lib/nltk/misc copying nltk/misc/__init__.py -> build/lib/nltk/misc creating build/lib/nltk/metrics copying nltk/metrics/spearman.py -> build/lib/nltk/metrics copying nltk/metrics/segmentation.py -> build/lib/nltk/metrics copying nltk/metrics/scores.py -> build/lib/nltk/metrics copying nltk/metrics/paice.py -> build/lib/nltk/metrics copying nltk/metrics/distance.py -> build/lib/nltk/metrics copying nltk/metrics/confusionmatrix.py -> build/lib/nltk/metrics copying nltk/metrics/association.py -> build/lib/nltk/metrics copying nltk/metrics/agreement.py -> build/lib/nltk/metrics copying nltk/metrics/__init__.py -> build/lib/nltk/metrics creating build/lib/nltk/inference copying nltk/inference/tableau.py -> build/lib/nltk/inference copying nltk/inference/resolution.py -> build/lib/nltk/inference copying nltk/inference/prover9.py -> build/lib/nltk/inference copying nltk/inference/nonmonotonic.py -> build/lib/nltk/inference copying nltk/inference/mace.py -> build/lib/nltk/inference copying nltk/inference/discourse.py -> build/lib/nltk/inference copying nltk/inference/api.py -> build/lib/nltk/inference copying nltk/inference/__init__.py -> build/lib/nltk/inference creating build/lib/nltk/draw copying nltk/draw/util.py -> build/lib/nltk/draw copying nltk/draw/tree.py -> build/lib/nltk/draw copying nltk/draw/table.py -> build/lib/nltk/draw copying nltk/draw/dispersion.py -> build/lib/nltk/draw copying nltk/draw/cfg.py -> build/lib/nltk/draw copying nltk/draw/__init__.py -> build/lib/nltk/draw creating build/lib/nltk/corpus copying nltk/corpus/util.py -> build/lib/nltk/corpus copying nltk/corpus/europarl_raw.py -> build/lib/nltk/corpus copying nltk/corpus/__init__.py -> build/lib/nltk/corpus creating build/lib/nltk/cluster copying nltk/cluster/util.py -> build/lib/nltk/cluster copying nltk/cluster/kmeans.py -> build/lib/nltk/cluster copying nltk/cluster/gaac.py -> build/lib/nltk/cluster copying nltk/cluster/em.py -> build/lib/nltk/cluster copying nltk/cluster/api.py -> build/lib/nltk/cluster copying nltk/cluster/__init__.py 
-> build/lib/nltk/cluster creating build/lib/nltk/classify copying nltk/classify/weka.py -> build/lib/nltk/classify copying nltk/classify/util.py -> build/lib/nltk/classify copying nltk/classify/tadm.py -> build/lib/nltk/classify copying nltk/classify/svm.py -> build/lib/nltk/classify copying nltk/classify/scikitlearn.py -> build/lib/nltk/classify copying nltk/classify/rte_classify.py -> build/lib/nltk/classify copying nltk/classify/positivenaivebayes.py -> build/lib/nltk/classify copying nltk/classify/naivebayes.py -> build/lib/nltk/classify copying nltk/classify/megam.py -> build/lib/nltk/classify copying nltk/classify/maxent.py -> build/lib/nltk/classify copying nltk/classify/decisiontree.py -> build/lib/nltk/classify copying nltk/classify/api.py -> build/lib/nltk/classify copying nltk/classify/__init__.py -> build/lib/nltk/classify creating build/lib/nltk/chunk copying nltk/chunk/util.py -> build/lib/nltk/chunk copying nltk/chunk/regexp.py -> build/lib/nltk/chunk copying nltk/chunk/named_entity.py -> build/lib/nltk/chunk copying nltk/chunk/api.py -> build/lib/nltk/chunk copying nltk/chunk/__init__.py -> build/lib/nltk/chunk creating build/lib/nltk/chat copying nltk/chat/zen.py -> build/lib/nltk/chat copying nltk/chat/util.py -> build/lib/nltk/chat copying nltk/chat/suntsu.py -> build/lib/nltk/chat copying nltk/chat/rude.py -> build/lib/nltk/chat copying nltk/chat/iesha.py -> build/lib/nltk/chat copying nltk/chat/eliza.py -> build/lib/nltk/chat copying nltk/chat/__init__.py -> build/lib/nltk/chat creating build/lib/nltk/ccg copying nltk/ccg/lexicon.py -> build/lib/nltk/ccg copying nltk/ccg/combinator.py -> build/lib/nltk/ccg copying nltk/ccg/chart.py -> build/lib/nltk/ccg copying nltk/ccg/api.py -> build/lib/nltk/ccg copying nltk/ccg/__init__.py -> build/lib/nltk/ccg creating build/lib/nltk/app copying nltk/app/wordnet_app.py -> build/lib/nltk/app copying nltk/app/wordfreq_app.py -> build/lib/nltk/app copying nltk/app/srparser_app.py -> build/lib/nltk/app copying nltk/app/rdparser_app.py -> build/lib/nltk/app copying nltk/app/nemo_app.py -> build/lib/nltk/app copying nltk/app/concordance_app.py -> build/lib/nltk/app copying nltk/app/collocations_app.py -> build/lib/nltk/app copying nltk/app/chunkparser_app.py -> build/lib/nltk/app copying nltk/app/chartparser_app.py -> build/lib/nltk/app copying nltk/app/__init__.py -> build/lib/nltk/app creating build/lib/nltk/align copying nltk/align/util.py -> build/lib/nltk/align copying nltk/align/phrase_based.py -> build/lib/nltk/align copying nltk/align/ibm3.py -> build/lib/nltk/align copying nltk/align/ibm2.py -> build/lib/nltk/align copying nltk/align/ibm1.py -> build/lib/nltk/align copying nltk/align/gdfa.py -> build/lib/nltk/align copying nltk/align/gale_church.py -> build/lib/nltk/align copying nltk/align/bleu.py -> build/lib/nltk/align copying nltk/align/api.py -> build/lib/nltk/align copying nltk/align/__init__.py -> build/lib/nltk/align creating build/lib/nltk/test/unit copying nltk/test/unit/utils.py -> build/lib/nltk/test/unit copying nltk/test/unit/test_tag.py -> build/lib/nltk/test/unit copying nltk/test/unit/test_stem.py -> build/lib/nltk/test/unit copying nltk/test/unit/test_seekable_unicode_stream_reader.py -> build/lib/nltk/test/unit copying nltk/test/unit/test_naivebayes.py -> build/lib/nltk/test/unit copying nltk/test/unit/test_hmm.py -> build/lib/nltk/test/unit copying nltk/test/unit/test_corpus_views.py -> build/lib/nltk/test/unit copying nltk/test/unit/test_corpora.py -> build/lib/nltk/test/unit copying 
nltk/test/unit/test_collocations.py -> build/lib/nltk/test/unit copying nltk/test/unit/test_classify.py -> build/lib/nltk/test/unit copying nltk/test/unit/test_2x_compat.py -> build/lib/nltk/test/unit copying nltk/test/unit/__init__.py -> build/lib/nltk/test/unit creating build/lib/nltk/corpus/reader copying nltk/corpus/reader/ycoe.py -> build/lib/nltk/corpus/reader copying nltk/corpus/reader/xmldocs.py -> build/lib/nltk/corpus/reader copying nltk/corpus/reader/wordnet.py -> build/lib/nltk/corpus/reader copying nltk/corpus/reader/wordlist.py -> build/lib/nltk/corpus/reader copying nltk/corpus/reader/verbnet.py -> build/lib/nltk/corpus/reader copying nltk/corpus/reader/util.py -> build/lib/nltk/corpus/reader copying nltk/corpus/reader/udhr.py -> build/lib/nltk/corpus/reader copying nltk/corpus/reader/toolbox.py -> build/lib/nltk/corpus/reader copying nltk/corpus/reader/timit.py -> build/lib/nltk/corpus/reader copying nltk/corpus/reader/tagged.py -> build/lib/nltk/corpus/reader copying nltk/corpus/reader/switchboard.py -> build/lib/nltk/corpus/reader copying nltk/corpus/reader/string_category.py -> build/lib/nltk/corpus/reader copying nltk/corpus/reader/sinica_treebank.py -> build/lib/nltk/corpus/reader copying nltk/corpus/reader/sentiwordnet.py -> build/lib/nltk/corpus/reader copying nltk/corpus/reader/senseval.py -> build/lib/nltk/corpus/reader copying nltk/corpus/reader/semcor.py -> build/lib/nltk/corpus/reader copying nltk/corpus/reader/rte.py -> build/lib/nltk/corpus/reader copying nltk/corpus/reader/propbank.py -> build/lib/nltk/corpus/reader copying nltk/corpus/reader/ppattach.py -> build/lib/nltk/corpus/reader copying nltk/corpus/reader/plaintext.py -> build/lib/nltk/corpus/reader copying nltk/corpus/reader/pl196x.py -> build/lib/nltk/corpus/reader copying nltk/corpus/reader/nps_chat.py -> build/lib/nltk/corpus/reader copying nltk/corpus/reader/nombank.py -> build/lib/nltk/corpus/reader copying nltk/corpus/reader/lin.py -> build/lib/nltk/corpus/reader copying nltk/corpus/reader/knbc.py -> build/lib/nltk/corpus/reader copying nltk/corpus/reader/ipipan.py -> build/lib/nltk/corpus/reader copying nltk/corpus/reader/indian.py -> build/lib/nltk/corpus/reader copying nltk/corpus/reader/ieer.py -> build/lib/nltk/corpus/reader copying nltk/corpus/reader/framenet.py -> build/lib/nltk/corpus/reader copying nltk/corpus/reader/dependency.py -> build/lib/nltk/corpus/reader copying nltk/corpus/reader/conll.py -> build/lib/nltk/corpus/reader copying nltk/corpus/reader/cmudict.py -> build/lib/nltk/corpus/reader copying nltk/corpus/reader/chunked.py -> build/lib/nltk/corpus/reader copying nltk/corpus/reader/childes.py -> build/lib/nltk/corpus/reader copying nltk/corpus/reader/chasen.py -> build/lib/nltk/corpus/reader copying nltk/corpus/reader/bracket_parse.py -> build/lib/nltk/corpus/reader copying nltk/corpus/reader/bnc.py -> build/lib/nltk/corpus/reader copying nltk/corpus/reader/api.py -> build/lib/nltk/corpus/reader copying nltk/corpus/reader/aligned.py -> build/lib/nltk/corpus/reader copying nltk/corpus/reader/__init__.py -> build/lib/nltk/corpus/reader copying nltk/test/wsd.doctest -> build/lib/nltk/test copying nltk/test/wordnet_lch.doctest -> build/lib/nltk/test copying nltk/test/wordnet.doctest -> build/lib/nltk/test copying nltk/test/util.doctest -> build/lib/nltk/test copying nltk/test/treetransforms.doctest -> build/lib/nltk/test copying nltk/test/tree.doctest -> build/lib/nltk/test copying nltk/test/toolbox.doctest -> build/lib/nltk/test copying nltk/test/tokenize.doctest -> 
build/lib/nltk/test copying nltk/test/tag.doctest -> build/lib/nltk/test copying nltk/test/stem.doctest -> build/lib/nltk/test copying nltk/test/simple.doctest -> build/lib/nltk/test copying nltk/test/sentiwordnet.doctest -> build/lib/nltk/test copying nltk/test/semantics.doctest -> build/lib/nltk/test copying nltk/test/resolution.doctest -> build/lib/nltk/test copying nltk/test/relextract.doctest -> build/lib/nltk/test copying nltk/test/propbank.doctest -> build/lib/nltk/test copying nltk/test/probability.doctest -> build/lib/nltk/test copying nltk/test/portuguese_en.doctest -> build/lib/nltk/test copying nltk/test/parse.doctest -> build/lib/nltk/test copying nltk/test/paice.doctest -> build/lib/nltk/test copying nltk/test/nonmonotonic.doctest -> build/lib/nltk/test copying nltk/test/misc.doctest -> build/lib/nltk/test copying nltk/test/metrics.doctest -> build/lib/nltk/test copying nltk/test/logic.doctest -> build/lib/nltk/test copying nltk/test/japanese.doctest -> build/lib/nltk/test copying nltk/test/internals.doctest -> build/lib/nltk/test copying nltk/test/inference.doctest -> build/lib/nltk/test copying nltk/test/index.doctest -> build/lib/nltk/test copying nltk/test/grammartestsuites.doctest -> build/lib/nltk/test copying nltk/test/grammar.doctest -> build/lib/nltk/test copying nltk/test/gluesemantics_malt.doctest -> build/lib/nltk/test copying nltk/test/gluesemantics.doctest -> build/lib/nltk/test copying nltk/test/generate.doctest -> build/lib/nltk/test copying nltk/test/framenet.doctest -> build/lib/nltk/test copying nltk/test/featstruct.doctest -> build/lib/nltk/test copying nltk/test/featgram.doctest -> build/lib/nltk/test copying nltk/test/drt.doctest -> build/lib/nltk/test copying nltk/test/discourse.doctest -> build/lib/nltk/test copying nltk/test/dependency.doctest -> build/lib/nltk/test copying nltk/test/data.doctest -> build/lib/nltk/test copying nltk/test/corpus.doctest -> build/lib/nltk/test copying nltk/test/compat.doctest -> build/lib/nltk/test copying nltk/test/collocations.doctest -> build/lib/nltk/test copying nltk/test/classify.doctest -> build/lib/nltk/test copying nltk/test/chunk.doctest -> build/lib/nltk/test copying nltk/test/childes.doctest -> build/lib/nltk/test copying nltk/test/chat80.doctest -> build/lib/nltk/test copying nltk/test/ccg.doctest -> build/lib/nltk/test copying nltk/test/bnc.doctest -> build/lib/nltk/test copying nltk/test/align.doctest -> build/lib/nltk/test copying nltk/VERSION -> build/lib/nltk
+ export PYTHONPATH=/usr/src/RPM/BUILD/python3
+ PYTHONPATH=/usr/src/RPM/BUILD/python3
+ pushd nltk_contrib
~/RPM/BUILD/python3/nltk_contrib ~/RPM/BUILD/python3 ~/RPM/BUILD/python-module-nltk-3.0.1
+ CFLAGS='-pipe -Wall -g -O2'
+ export CFLAGS
+ CXXFLAGS='-pipe -Wall -g -O2'
+ export CXXFLAGS
+ FFLAGS='-pipe -Wall -g -O2'
+ export FFLAGS
+ python3 setup.py build
Traceback (most recent call last):
  File "setup.py", line 13, in <module>
    import nltk
  File "/usr/src/RPM/BUILD/python3/nltk/__init__.py", line 117, in <module>
    from nltk.align import *
  File "/usr/src/RPM/BUILD/python3/nltk/align/__init__.py", line 15, in <module>
    from nltk.align.ibm1 import IBMModel1
  File "/usr/src/RPM/BUILD/python3/nltk/align/ibm1.py", line 18, in <module>
    from nltk.corpus import comtrans
  File "/usr/src/RPM/BUILD/python3/nltk/corpus/__init__.py", line 64, in <module>
    from nltk.tokenize import RegexpTokenizer
  File "/usr/src/RPM/BUILD/python3/nltk/tokenize/__init__.py", line 65, in <module>
    from nltk.tokenize.regexp import (RegexpTokenizer, WhitespaceTokenizer,
  File "/usr/src/RPM/BUILD/python3/nltk/tokenize/regexp.py", line 201, in <module>
    blankline_tokenize = BlanklineTokenizer().tokenize
  File "/usr/src/RPM/BUILD/python3/nltk/tokenize/regexp.py", line 172, in __init__
    RegexpTokenizer.__init__(self, r'\s*\n\s*\n\s*', gaps=True)
  File "/usr/src/RPM/BUILD/python3/nltk/tokenize/regexp.py", line 119, in __init__
    self._regexp = compile_regexp_to_noncapturing(pattern, flags)
  File "/usr/src/RPM/BUILD/python3/nltk/internals.py", line 54, in compile_regexp_to_noncapturing
    return sre_compile.compile(convert_regexp_to_noncapturing_parsed(sre_parse.parse(pattern)), flags=flags)
  File "/usr/src/RPM/BUILD/python3/nltk/internals.py", line 50, in convert_regexp_to_noncapturing_parsed
    parsed_pattern.pattern.groups = 1
AttributeError: can't set attribute
error: Bad exit status from /usr/src/tmp/rpm-tmp.8096 (%build)

RPM build errors:
    Bad exit status from /usr/src/tmp/rpm-tmp.8096 (%build)
Command exited with non-zero status 1
277.02user 9.12system 9:07.78elapsed 52%CPU (0avgtext+0avgdata 58772maxresident)k
0inputs+0outputs (0major+3015658minor)pagefaults 0swaps
hsh-rebuild: rebuild of `python-module-nltk-3.0.1-alt1.1.1.src.rpm' failed.
Command exited with non-zero status 1
282.50user 12.39system 9:20.23elapsed 52%CPU (0avgtext+0avgdata 122228maxresident)k
4120inputs+0outputs (0major+3223473minor)pagefaults 0swaps
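Root cause of the failure above: nltk 3.0.1's convert_regexp_to_noncapturing_parsed() (nltk/internals.py) forces capturing groups in a tokenizer regexp to behave as non-capturing by assigning to the parsed pattern's groups attribute. On the python3-base-3.5.4 interpreter installed in this chroot, sre_parse exposes groups as a read-only property, so the assignment raises the AttributeError that aborts %build. A minimal sketch of the failure mode, for illustration only (assumes Python 3.5.x as used here; on Python 3.8+ the SubPattern attribute is named .state rather than .pattern):

    import sre_parse

    # The same pattern BlanklineTokenizer hands to RegexpTokenizer.__init__:
    parsed = sre_parse.parse(r'\s*\n\s*\n\s*')

    # nltk/internals.py line 50 does the equivalent of this assignment:
    try:
        parsed.pattern.groups = 1
    except AttributeError as exc:
        print(exc)  # "can't set attribute" -- the error seen in the traceback

Later nltk releases dropped this sre_parse manipulation, so rebuilding this src.rpm as-is would need either a patched nltk or an older interpreter.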