The data in your sample can often contain duplicate rows. This is simply a reality of dealing with data that is collected automatically, or even of data that is collected manually. It is often considered best to err on the side of having duplicates instead of missing data, especially if the data can be considered idempotent. However, duplicate data increases the size of the dataset, and if the data is not idempotent, then it is not appropriate to process the duplicates.
To facilitate finding duplicate data, pandas provides a .duplicated() method that returns a Boolean Series, where each entry represents whether or not the row is a duplicate. A True value represents that the specific row has appeared earlier in the DataFrame object, with all column values being identical.
To demonstrate this, the following code creates a DataFrame object with duplicate rows:
In [40]: # a DataFrame with lots of duplicate data
         data = pd.DataFrame({'a': ['x'] * 3 + ['y'] * 4,
                              'b': [1, 1, 2, 3, 3, 4, 4]})
         data

Out[40]:
   a  b
0  x  1
1  x  1
2  x  2
3  y  3
4  y  3
5  y  4
6  y  4
The DataFrame object with duplicate rows created by the preceding code can be analyzed using the .duplicated() method. This method determines that a row is a duplicate if the values in all of its columns were already seen in an earlier row of the DataFrame object:
In [41]: # reports which rows are duplicates based upon
         # if the data in all columns was seen before
         data.duplicated()

Out[41]:
0    False
1     True
2    False
3    False
4     True
5    False
6     True
dtype: bool
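Because .duplicated() returns a Boolean Series, it can also be used directly as a mask to extract just the duplicated rows with standard Boolean indexing. A small sketch, rebuilding the same sample data as above:

```python
import pandas as pd

# rebuild the sample frame from the example above
data = pd.DataFrame({'a': ['x'] * 3 + ['y'] * 4,
                     'b': [1, 1, 2, 3, 3, 4, 4]})

# use the Boolean Series from .duplicated() as a mask
# to select only the rows flagged as duplicates
dupes = data[data.duplicated()]
print(dupes.index.tolist())  # rows 1, 4 and 6 repeat earlier rows
```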
Duplicate rows can be dropped from a DataFrame object by using the .drop_duplicates() method. This method returns a copy of the DataFrame object with the duplicate rows removed. It is also possible to pass the inplace=True parameter to remove the rows without making a copy:
In [42]: # drop duplicate rows retaining first row of the duplicates
         data.drop_duplicates()

Out[42]:
   a  b
0  x  1
2  x  2
3  y  3
5  y  4
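The inplace=True variant mentioned above modifies the DataFrame object directly instead of returning a new one. A minimal sketch using the same sample data:

```python
import pandas as pd

data = pd.DataFrame({'a': ['x'] * 3 + ['y'] * 4,
                     'b': [1, 1, 2, 3, 3, 4, 4]})

# modify data directly instead of returning a copy;
# with inplace=True the method returns None
data.drop_duplicates(inplace=True)
print(len(data))  # 4 unique rows remain, labels 0, 2, 3, 5
```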
Note that there is a ramification to which index labels remain when dropping duplicates. The duplicate records may have different index labels (labels are not taken into account when determining a duplicate), so which row is kept affects the set of labels in the resulting DataFrame object.
The default operation is to keep the first row of each set of duplicates. If you want to keep the last row instead, you can use the keep='last' parameter (in older versions of pandas, this was take_last=True). The following code demonstrates how the result differs using this parameter:
In [43]: # drop duplicate rows, only keeping the last
         # instance of any data
         data.drop_duplicates(keep='last')

Out[43]:
   a  b
1  x  1
2  x  2
4  y  3
6  y  4
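Besides 'first' and 'last', the keep parameter also accepts False, which discards every member of a duplicate group and retains only rows that occur exactly once. A short sketch with the same sample data:

```python
import pandas as pd

data = pd.DataFrame({'a': ['x'] * 3 + ['y'] * 4,
                     'b': [1, 1, 2, 3, 3, 4, 4]})

# keep=False drops all rows that have any duplicate,
# leaving only rows whose values occur exactly once
unique_only = data.drop_duplicates(keep=False)
print(unique_only.index.tolist())  # only row 2 ('x', 2) is unique
```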
If you want to check for duplicates based on a smaller set of columns, you can specify a list of column names:
In [44]: # add a column c with values 0..6
         # this makes .duplicated() report no duplicate rows
         data['c'] = range(7)
         data.duplicated()

Out[44]:
0    False
1    False
2    False
3    False
4    False
5    False
6    False
dtype: bool

In [45]: # but if we specify duplicates to be dropped only in columns a & b
         # they will be dropped
         data.drop_duplicates(['a', 'b'])

Out[45]:
   a  b  c
0  x  1  0
2  x  2  2
3  y  3  3
5  y  4  5
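The .duplicated() method accepts the same restriction through its subset parameter, so duplicates across just columns a and b can be identified without dropping anything. A sketch continuing the example above, where the all-unique column c was added:

```python
import pandas as pd

data = pd.DataFrame({'a': ['x'] * 3 + ['y'] * 4,
                     'b': [1, 1, 2, 3, 3, 4, 4]})
data['c'] = range(7)  # all-unique column, as in the example above

# restrict the duplicate check to columns a and b;
# column c is ignored in the comparison
mask = data.duplicated(subset=['a', 'b'])
print(mask.tolist())  # rows 1, 4 and 6 are flagged again
```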