I have to do some data analysis on a table with 400+ million rows. I got this to work on a small sample, but I'm sure it will run out of memory in production.

You can use lag() to do this. Note that the aliases defined by the lag() calls cannot be referenced in the WHERE clause of the same SELECT, so the filter has to move to an outer query:
select *
from (select t.*,
             lag(status_2) over (partition by serial_no order by date) as prev_status_2,
             lag(date) over (partition by serial_no order by date) as prev_date
      from tbl t
     ) t
where status_1 = 'in_transit' and prev_status_2 = 'x'
Hope this helps.

As you can see, there are lots of ways to go. You could do this with a series of loops, as @Codoremifa showed you, or with a handy add-on package such as data.table, which @RInatM walked you through. I made an example that uses sapply to loop through the data. First, I calculated the distance between each consecutive pair of points for the whole dataset, based on your code. I used with() to avoid dollar-sign notation and the extract function [. Note that the output vector pairdist is one element shorter than the number of rows in the dataset; a sketch is below.
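A minimal sketch of that sapply approach. The data frame dat and its coordinate columns x and y are placeholders for illustration, since the original data isn't shown here:

# Assumed example data; substitute your own data frame and column names.
dat <- data.frame(x = c(0, 3, 3, 7), y = c(0, 4, 4, 1))

# Distance between each consecutive pair of points. with() lets us write
# x and y instead of dat$x and dat$y.
pairdist <- with(dat, sapply(seq_len(nrow(dat) - 1), function(i) {
  sqrt((x[i + 1] - x[i])^2 + (y[i + 1] - y[i])^2)
}))

length(pairdist)  # nrow(dat) - 1: one element shorter than the data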
-- Reconstructed query: the select list and the join condition were missing
-- from the original fragment, so p1.* and p1.prod = p2.prod are assumptions.
select top (1) with ties p1.*
from shpro p1
     inner join shpro p2
         on p1.prod = p2.prod
where cast(p1.orderdate as DATE) > GETDATE()
  and cast(p1.shipdate as DATE) < GETDATE() - 1
order by rank() over (partition by p1.prod order by p1.id desc)

TOP (1) WITH TIES requires an ORDER BY, and the stray rank() expression only makes sense there: it keeps every row that ranks first within its p1.prod partition.
CREATE TABLE #TEST (id INT, name VARCHAR(50));  -- column types assumed

INSERT INTO #TEST (id, name)
VALUES (1, 'a'), (1, 'b'), (2, 'c');            -- sample rows for illustration

SELECT TOP (1) WITH TIES id, name
FROM #TEST
ORDER BY ROW_NUMBER() OVER (PARTITION BY id ORDER BY id);

ROW_NUMBER() restarts at 1 for every id, so TOP (1) WITH TIES keeps all rows that tie at row number 1, which is exactly one row per id.
Fitting data to the Faddeeva function using Python's optimize.leastsq() and optimize.curve_fit()
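SciPy exposes the Faddeeva function as scipy.special.wofz, and a common setup is fitting a Voigt-style profile built from its real part. Below is a minimal sketch using optimize.curve_fit; the model, the parameter names (amp, center, sigma, gamma), and the synthetic data are illustrative assumptions, not the original question's setup.

import numpy as np
from scipy.special import wofz
from scipy.optimize import curve_fit

def voigt(x, amp, center, sigma, gamma):
    # Voigt profile from the real part of the Faddeeva function w(z).
    z = ((x - center) + 1j * gamma) / (sigma * np.sqrt(2))
    return amp * np.real(wofz(z)) / (sigma * np.sqrt(2 * np.pi))

# Synthetic data for demonstration only.
rng = np.random.default_rng(0)
x = np.linspace(-10, 10, 200)
y = voigt(x, 3.0, 0.5, 1.2, 0.8) + 0.01 * rng.normal(size=x.size)

p0 = [1.0, 0.0, 1.0, 1.0]  # a sensible starting guess matters here
popt, pcov = curve_fit(voigt, x, y, p0=p0)
print(popt)  # fitted amp, center, sigma, gamma

The same residuals can be minimized with optimize.leastsq directly, but curve_fit wraps that interface and also returns the parameter covariance matrix.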