Is it possible to append Series to rows of DataFrame without making a list first?
Date : March 29 2020, 07:55 AM
Maybe an easier way would be to append the pandas.Series to the pandas.DataFrame with the ignore_index=True argument to DataFrame.append(). Example:

DF = pd.DataFrame()
for sample, data in D_sample_data.items():
    SR_row = pd.Series(data.D_key_value)
    DF = DF.append(SR_row, ignore_index=True)
In [1]: import pandas as pd

In [2]: df = pd.DataFrame([[1,2],[3,4]], columns=['A','B'])

In [3]: df
Out[3]:
   A  B
0  1  2
1  3  4

In [5]: s = pd.Series([5,6], index=['A','B'])

In [6]: s
Out[6]:
A    5
B    6
dtype: int64

In [36]: df.append(s, ignore_index=True)
Out[36]:
   A  B
0  1  2
1  3  4
2  5  6
To keep each sample name as the row index instead of using DF = DF.append(SR_row, ignore_index=True), give the Series a name and append without ignore_index:

DF = pd.DataFrame()
for sample, data in D_sample_data.items():
    SR_row = pd.Series(data.D_key_value, name=sample)
    DF = DF.append(SR_row)
DF.head()
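Note that DataFrame.append was deprecated in pandas 1.4 and removed in pandas 2.0. A rough equivalent for newer versions, sketched against the same D_sample_data structure from the question, concatenates the Series instead of growing the frame row by row:

import pandas as pd

# minimal sketch, assuming D_sample_data / D_key_value as in the question;
# each Series becomes one row, with the sample name as its index label
DF = pd.concat(
    (pd.Series(data.D_key_value, name=sample)
     for sample, data in D_sample_data.items()),
    axis=1,
).T
DF.head()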
|
Pandas: Append rows to DataFrame already running through pandas.DataFrame.apply
Tag : python , By : user181945
Date : March 29 2020, 07:55 AM
I don't think there is a way to use apply the way you envision. And even if there were, it is simpler to collect the rows in an ordinary loop and build the DataFrame once at the end:

import pandas as pd
def crawl(url_stack):
    url_stack = list(url_stack)
    result = []
    while url_stack:
        url = url_stack.pop()
        driver.get(url)               # `driver` is assumed to exist already (e.g. a browser driver)
        scraped_urls = ...
        url_stack.extend(scraped_urls)
        something_else = "foobar"
        result.append([url, something_else])
    return pd.DataFrame(result, columns=["URL", "Something else"])
df = pd.read_csv("spreadsheet.csv", delimiter=",")
df = crawl(df['URL'][::-1])
df.to_csv("result.csv", sep=",")
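The snippet leaves driver undefined; presumably it is an already-initialized browser automation handle, for example a Selenium WebDriver. A minimal setup sketch, purely as an assumption about the asker's environment:

from selenium import webdriver

# hypothetical setup; any WebDriver (Chrome, Firefox, ...) would work the same way
driver = webdriver.Chrome()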
|
Python pandas: Append rows of DataFrame and delete the appended rows
Date : March 29 2020, 07:55 AM
You can build a grouping Series with isin, where and ffill (here L is the list of ids that each start a new group), then use it in groupby with apply to join the text:

s = df.id.where(df.id.isin(L)).ffill().astype(int)
df1 = df.groupby(s)['text'].apply(''.join).reset_index()
print (df1)
   id          text
0   1        abczxc
1   3     qweasfefe
2   6  ertpoiwereer
3  10        poywqr
s = df.id.where(df.id.isin(L)).ffill().astype(int)
print (s)
0      1
1      1
2      3
3      3
4      3
5      6
6      6
7      6
8      6
9     10
10    10
Name: id, dtype: int32
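For reference, a self-contained sketch with input data reconstructed to match the printed result (the question's actual df and L are assumptions here):

import pandas as pd

# assumed data, reconstructed from the output above
df = pd.DataFrame({
    'id':   [1, 2, 3, 4, 5, 6, 7, 8, 9, 10, 11],
    'text': ['abc', 'zxc', 'qwe', 'asf', 'efe',
             'ert', 'poi', 'wer', 'eer', 'poy', 'wqr'],
})
L = [1, 3, 6, 10]  # ids that start each group

s = df.id.where(df.id.isin(L)).ffill().astype(int)
df1 = df.groupby(s)['text'].apply(''.join).reset_index()
print(df1)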
|
How do I calculate mean on filtered rows of a pandas dataframe and append means to all columns of original dataframe?
Date : March 29 2020, 07:55 AM
How can I calculate every column's mean over ONLY the rows that aren't equal to zero, and append a new row at the bottom with those averages, with only one line of code? It doesn't have to be one line, but I'm wondering why this doesn't work?

As John Galt commented, you need '0' with quotes because the bar column holds strings, so zero is the string '0':

df = df.append(df[(df.bar != '0')].mean(numeric_only=True), ignore_index=True)
print (df)
    foo   bar    total
0  foo1  bar1  293.090
1  foo2     0    0.000
2  foo3  bar3  342.300
3   NaN   NaN  317.695
If you prefer empty strings instead of NaN in the non-numeric columns, reindex the means to all columns first:

s = df[(df.bar != '0')].mean(numeric_only=True).reindex(df.columns, fill_value='')
df = df.append(s, ignore_index=True)
print (df)
    foo   bar    total
0  foo1  bar1  293.090
1  foo2     0    0.000
2  foo3  bar3  342.300
3               317.695
Or, starting again from the original three-row df, assign the row in place instead of appending:

df.loc[len(df.index)] = s
print (df)
    foo   bar    total
0  foo1  bar1  293.090
1  foo2     0    0.000
2  foo3  bar3  342.300
3               317.695
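For reference, sample data consistent with the output above (assumed, since the question's frame isn't shown), which also shows why the comparison has to be against the string '0':

import pandas as pd

# bar is an object (string) column, so zero appears as the string '0'
df = pd.DataFrame({
    'foo':   ['foo1', 'foo2', 'foo3'],
    'bar':   ['bar1', '0', 'bar3'],
    'total': [293.09, 0.0, 342.3],
})

# mean of the numeric columns over the rows where bar != '0'
print(df[df.bar != '0'].mean(numeric_only=True))   # total    317.695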
|
write rows in pandas dataframe and append it to existing dataframe
Date : March 29 2020, 07:55 AM
I have the output of my script as a year and the count of a word from an article in that particular year.

Something like this should do it:

#!/usr/bin/env python
def mkdf(filename):
    def combine(term, l):
        # pair the collected lines up as {year: count} and attach the term
        d = {"term": term}
        d.update(dict(zip(l[::2], l[1::2])))
        return d

    term = None
    other = []
    with open(filename) as I:
        n = 0
        for line in I:
            line = line.strip()
            try:
                int(line)
            except Exception as e:
                # not an int, so this line is a new term
                if term:  # if we already have one, create the record
                    yield combine(term, other)
                term = line
                other = []
                n = 0
            else:
                # an integer line; the first one after a term (n == 0) is skipped
                if n > 0:
                    other.append(line)
                n += 1
    # and the last one
    yield combine(term, other)


if __name__ == "__main__":
    import pandas as pd
    import sys

    df = pd.DataFrame([r for r in mkdf(sys.argv[1])])
    print(df)
  2013 2014  term
0  118   23  abcd
1    1   45   xyz
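To append these parsed rows to a DataFrame you already have (the "existing dataframe" from the title, assumed here to be called existing_df and to share these columns), pd.concat stacks the two frames:

# hypothetical existing frame with the same columns as the parsed one
combined = pd.concat([existing_df, df], ignore_index=True)
print(combined)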
|