Consider this code:
from io import StringIO
import pandas as pd

txt = """a, RR
10, 1asas
20, 1asasas
30,
40, asas
50, ayty
60, 2asas
80, 3asas"""

frame = pd.read_csv(StringIO(txt), skipinitialspace=True)
print(frame, "\n\n\n")

l = []
# Walk over every RR that does not start with "1" (na=True makes NaN rows
# count as starting with "1", so they are skipped too).
for i, j in frame[~frame['RR'].str.startswith("1", na=True)]['RR'].items():
    if j.startswith(('2', '3')):
        # Drop if "1" + RR[1:] already exists.
        if frame[frame['RR'].str.startswith("1", na=False)]['RR'].str.match("1" + j[1:]).any():
            l.append(i)
    else:
        # Drop if "1" + RR already exists.
        if frame[frame['RR'].str.startswith("1", na=False)]['RR'].str.match("1" + j).any():
            l.append(i)
frame = frame.drop(frame.index[l])
print(frame)
What I am doing here is:
- Loop through the dataframe and drop any RR which already has 1RR in the dataframe.
- If RR starts with 2 or 3, drop it if 1 + RR[1:] is in the dataframe.
- If RR starts with 1 or is NaN, do not touch it.
The code is working fine, but this dataframe will have up to 1 million entries and I don't think this code is optimised. As I have just started with pandas, I have limited knowledge.
Is there any way to achieve this without iteration? Does pandas have a built-in utility for this?
Comment: Do you mean drop RR when 1 + RR is also present, like here where both asas and 1asas exist, so asas would be dropped? Try series1 = frame.loc[frame['RR'].str.startswith("1", na=False), 'RR'] and then frame.loc[(frame['RR'].str.startswith("2")) | (frame['RR'].str.startswith("3")), 'RR'].str.slice(1).isin(series1.str.slice(1)) (this deals with your second condition).
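The startswith/isin idea hinted at in the comment can be extended to cover both conditions with boolean masks, avoiding the Python loop entirely. A sketch, assuming a modern pandas; the names `ones`, `drop_23`, and `drop_rest` are my own:

```python
from io import StringIO
import pandas as pd

txt = """a,RR
10,1asas
20,1asasas
30,
40,asas
50,ayty
60,2asas
80,3asas"""

frame = pd.read_csv(StringIO(txt))
rr = frame['RR']

# Suffixes already "covered": values starting with "1", with the "1" stripped.
ones = rr[rr.str.startswith("1", na=False)].str.slice(1)

starts_1 = rr.str.startswith("1", na=False)
starts_23 = rr.str.startswith("2", na=False) | rr.str.startswith("3", na=False)

# Rows starting with 2/3: drop when "1" + RR[1:] is already present.
drop_23 = starts_23 & rr.str.slice(1).isin(ones)
# Remaining non-1, non-NaN rows: drop when "1" + RR is already present.
drop_rest = ~starts_23 & ~starts_1 & rr.notna() & rr.isin(ones)

result = frame[~(drop_23 | drop_rest)]
print(result)
```

On the sample data this keeps rows 10, 20, 30 (NaN) and 50, matching the loop version, but each mask is computed once over the whole column rather than re-filtering the dataframe on every iteration.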