
Infinite loop when adding a row to a list in a class in python3

I have a script which contains two classes. (I'm obviously deleting a lot of stuff that I don't believe is relevant to the error I'm dealing with.) The eventual task is to create a decision tree, as I mentioned in this question.

Unfortunately, I'm getting an infinite loop, and I'm having difficulty identifying why. I've identified the line of code that's going haywire, but I would have thought the iterator and the list I'm adding to would be different objects. Is there some side effect of list's .append functionality that I'm not aware of? Or am I making some other blindingly obvious mistake?

class Dataset:
    individuals = [] #Becomes a list of dictionaries, in which each dictionary is a row from the CSV with the headers as keys
    def field_set(self): #Returns a list of the fields in individuals[] that can be used to split the data (i.e. have more than one value amongst the individuals)
    def classified(self, predicted_value): #Returns True if all the individuals have the same value for predicted_value
    def fields_exhausted(self, predicted_value): #Returns True if all the individuals are identical except for predicted_value
    def lowest_entropy_value(self, predicted_value): #Returns the field that will reduce entropy (http://en.wikipedia.org/wiki/Entropy_%28information_theory%29) the most
    def __init__(self, individuals=[]):

and

class Node:
    ds = Dataset() #The data that is associated with this Node
    links = [] #List of Nodes, the offspring Nodes of this node
    level = 0 #Tree depth of this Node
    split_value = '' #Field used to split out this Node from the parent node
    node_value = '' #Value used to split out this Node from the parent Node

    def split_dataset(self, split_value): #Splits the dataset into a series of smaller datasets, each of which has a unique value for split_value.  Then creates subnodes to store these datasets.
        fields = [] #List of options for split_value amongst the individuals
        datasets = {} #Dictionary of Datasets, each one with a value from fields[] as its key
        for field in self.ds.field_set()[split_value]: #Populates fields[] and creates an empty Dataset for each value
            fields.append(field)
            datasets[field] = Dataset()
        for i in self.ds.individuals: #Adds each individual to the Dataset in datasets{} that matches its value for split_value
            datasets[i[split_value]].individuals.append(i) #<---Causes an infinite loop on the second hit
        for field in fields: #Creates subnodes from each of the Datasets in datasets{}
            self.add_subnode(datasets[field],split_value,field)

    def add_subnode(self, dataset, split_value='', node_value=''):
    def __init__(self, level, dataset=Dataset()):

My initialisation code is currently:

if __name__ == '__main__':
    filename = (sys.argv[1]) #Takes in a CSV file
    predicted_value = "# class" #Identifies the field from the CSV file that should be predicted
    base_dataset = parse_csv(filename) #Turns the CSV file into a list of lists
    parsed_dataset = individual_list(base_dataset) #Turns the list of lists into a list of dictionaries
    root = Node(0, Dataset(parsed_dataset)) #Creates a root node, passing it the full dataset
    root.split_dataset(root.ds.lowest_entropy_value(predicted_value)) #Performs the first split, creating multiple subnodes
    n = root.links[0] 
    n.split_dataset(n.ds.lowest_entropy_value(predicted_value)) #Attempts to split the first subnode.


class Dataset:
    individuals = []

Suspicious. Unless you want a single static member list shared by all instances of Dataset, you shouldn't do that. If you are setting self.individuals = something in __init__, then you don't need to set individuals here at all.
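A quick way to see that sharing in isolation (toy code, not your real Dataset):

class Dataset:
    individuals = []  # created once when the class body runs; one list for every instance

a = Dataset()
b = Dataset()
a.individuals.append('row 1')
print(b.individuals)  # ['row 1'] -- b sees the row appended through a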

    def __init__(self, individuals=[]):

Still suspicious. Are you assigning the individuals argument to self.individuals? If so, you are assigning the same individuals list, created at function definition time, to every Dataset that is created with a default argument. Add an item to one Dataset's list and all the others created without an explicit individuals argument will get that item too.
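Since the body of __init__ was trimmed out of the question, I'm assuming it does self.individuals = individuals; under that assumption the default list behaves like this:

class Dataset:
    def __init__(self, individuals=[]):  # the [] is built once, when the def statement runs
        self.individuals = individuals   # assumed body: every default call hands back that same list

d1 = Dataset()
d2 = Dataset()
d1.individuals.append('row 1')
print(d2.individuals)  # ['row 1'] -- both instances hold the one default list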

Similarly:

class Node:
    def __init__(self, level, dataset=Dataset()):

All Nodes created without an explicit dataset argument will receive the exact same default Dataset instance.

This is the mutable default argument problem, and the kind of destructive iteration it produces is very likely what is causing your infinite loop.
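A minimal sketch of constructors that avoid both shared defaults, using None as the sentinel and building the fresh list or Dataset inside the function (the attribute assignments are guesses, since your real __init__ bodies were trimmed from the question):

class Dataset:
    def __init__(self, individuals=None):
        # fresh list per Dataset instead of one list shared through the default
        self.individuals = individuals if individuals is not None else []

class Node:
    def __init__(self, level, dataset=None):
        # fresh Dataset per Node instead of one Dataset shared through the default
        self.ds = dataset if dataset is not None else Dataset()
        self.links = []  # per-instance, so one Node's subnodes aren't visible on every other Node
        self.level = level
        self.split_value = ''
        self.node_value = ''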


I suspect that you are appending to the same list that you are iterating over, causing it to grow before the iterator can reach the end of it. Try iterating over a copy of the list instead:

for i in list(self.ds.individuals):
    datasets[i[split_value]].individuals.append(i) 
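To see why the copy matters: a for loop over a list walks it by index, so if every pass appends a new element the loop can never reach the end. Iterating over a snapshot keeps the loop bounded while the original list is free to grow:

items = ['row 1']
# for i in items:        # appending inside this loop never terminates;
#     items.append(i)    # the list grows exactly as fast as the index advances
for i in list(items):    # list(...) takes a fixed-size snapshot
    items.append(i)
print(items)             # ['row 1', 'row 1']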
