๋ฐ์ด์ฝ˜ ๋น„ํŠธ์ฝ”์ธ ํŠธ๋ ˆ์ด๋” ์‹œ์ฆŒ2 ์Šคํ„ฐ๋””

ํ”„๋กœ์ ํŠธ ๊ฐœ์š”

  • ํ”„๋กœ์ ํŠธ ๊ณผ์ • : ์ธ์ฒœ๋Œ€ํ•™๊ต ์‚ฐ์—…๊ฒฝ์˜๊ณตํ•™๊ณผ ์บก์Šคํ†ค ๋””์ž์ธ(๊ณต๊ณผ๋Œ€ํ•™ ์กธ์—… ์ž‘ํ’ˆ)
  • ํ”„๋กœ์ ํŠธ ๋ช… : ๋จธ์‹ ๋Ÿฌ๋‹์„ ํ†ตํ•œ ๋น„ํŠธ์ฝ”์ธ ๊ฐ€๊ฒฉ ์˜ˆ์ธก
  • ํ”„๋กœ์ ํŠธ ๊ธฐ๊ฐ„ : 21.04.01 ~ 21.05.31

___

ํ”„๋กœ์ ํŠธ ์ˆ˜ํ–‰ ๋ชฉ์ 

๋ณธ '๋จธ์‹ ๋Ÿฌ๋‹์„ ํ†ตํ•œ ๋น„ํŠธ์ฝ”์ธ ๊ฐ€๊ฒฉ์˜ˆ์ธก' ํ”„๋กœ์ ํŠธ๋Š” ์ธ์ฒœ๋Œ€ํ•™๊ต ์‚ฐ์—…๊ฒฝ์˜๊ณตํ•™๊ณผ 4ํ•™๋…„ ์žฌํ•™์ƒ๋“ค์˜ ์กธ์—… ์ž‘ํ’ˆ์„ ์œ„ํ•ด ์‹œ์ž‘ํ•œ ํ”„๋กœ์ ํŠธ์ž…๋‹ˆ๋‹ค. ๊ฐ€๊ธ‰์  ๊ต๋‚ด ๋ฌธ์ œ๊ฐ€ ์•„๋‹Œ ์‹ค์ƒํ™œ์—์„œ ์ ‘๊ทผ๊ฐ€๋Šฅํ•œ ๋ฌธ์ œ๋ฅผ ๋‹ค๋ฃจ๋Š” ๊ฒƒ์— ์ดˆ์ ์„ ๋งž์ถ”์—ˆ์œผ๋ฉฐ, ์ตœ๊ทผ ์ด์Šˆ๊ฐ€ ๋˜๊ณ  ์žˆ๋Š” ๋น„ํŠธ์ฝ”์ธ์— ๋Œ€ํ•ด ๋‹ค๋ค„๋ณด๋Š” ๊ฒƒ์ด ์ข‹๊ฒ ๋‹ค๋ผ๋Š” ๋ชฉ์ ์œผ๋กœ ํ•ด๋‹น ํ”„๋กœ์ ํŠธ๋ฅผ ์‹œ์ž‘ํ•˜๊ฒŒ ๋˜์—ˆ์Šต๋‹ˆ๋‹ค.

์ด๋กœ์จ ์‚ฐ์—…๊ฒฝ์˜๊ณตํ•™ ์กธ์—…์ž‘ํ’ˆ์„ ์œ„ํ•ด ๋ฐ์ด์ฝ˜ ์ธ๊ณต์ง€๋Šฅ ๋น„ํŠธ ํŠธ๋ ˆ์ด๋” ๊ฒฝ์ง„๋Œ€ํšŒ์— ์ฐธ๊ฐ€ํ•˜์˜€์Šต๋‹ˆ๋‹ค. ๊ฒฐ๊ณผ์ ์œผ๋กœ ์ข‹์€ ์„ฑ๊ณผ๋ฅผ ๊ฑฐ๋‘์ง€๋Š” ๋ชปํ•˜์˜€๋Š๋‚˜, ํ•™๋ฌธ์ ์œผ๋กœ ๋‹ค์–‘ํ•œ ์ ‘๊ทผ์„ ์‹œ๋„ํ•ด๋ณด์•˜๋‹ค๋Š” ์ ์— ์˜๋ฏธ๋ฅผ ๋‘๊ณ  ๋ด์ฃผ์‹œ๋ฉด ๊ฐ์‚ฌํ•˜๊ฒ ์Šต๋‹ˆ๋‹ค.

๋ณธ ํ”„๋กœ์ ํŠธ์˜ ๊ตฌ์„ฑ์€ ์•„๋ž˜์™€ ๊ฐ™์Šต๋‹ˆ๋‹ค.

  • Chapter. 1 - EDA
  • Chapter. 2 - Season 1 pilot
  • Chapter. 3 - Personal modeling prediction
  • Chapter. 4 - Data preprocess
  • Chapter. 5 - Pytorch modeling prediction
  • Chapter. 6 - Experiments & Simulation
  • Reference

ํ”„๋กœ์ ํŠธ ์„ค๋ช…

๋ณธ ํ”„๋กœ์ ํŠธ ์ˆ˜ํ–‰์€ ์ด ๋ฌธ์ œ๋Š” Forecasting problem์ด๋ผ๋Š” ๊ฐ€์ •ํ•˜์— ์ง„ํ–‰ํ•˜์˜€์Šต๋‹ˆ๋‹ค. ํ”„๋กœ์ ํŠธ ์ดˆ๊ธฐ์—๋Š” EDA๋ฅผ ํ†ตํ•ด ์ด๋ฒˆ ํ”„๋กœ์ ํŠธ์—์„œ ๋‹ค๋ฃฐ ๋ฐ์ดํ„ฐ์— ๋Œ€ํ•ด ์ดํ•ด๋ฅผ ํ•˜์˜€์Šต๋‹ˆ๋‹ค. (์ผ๋ฐ˜์ ์ธ ๊ฐ€๊ฒฉ ๋ฐ์ดํ„ฐ ์ด์™ธ์— ์ง๊ด€์ ์œผ๋กœ ์ดํ•ดํ•˜๊ธฐ ์–ด๋ ค์šด ๋ฐ์ดํ„ฐ๋“ค์ด ์กด์žฌํ•˜์˜€๊ธฐ์— ์ˆ˜ํ–‰ํ•˜์˜€์Šต๋‹ˆ๋‹ค) ๋˜ํ•œ, ์‹œ์ฆŒ1 ์ด ์กด์žฌํ•˜์˜€๊ธฐ์— ๊ธฐ์กด ํŒ€๋“ค์€ ์ด ๋ฌธ์ œ์— ๋Œ€ํ•ด ์–ด๋–ป๊ฒŒ ์ ‘๊ทผํ•˜์˜€๋Š”์ง€ ์‚ดํŽด๋ณด์•˜๊ณ  ์ดํ›„ ๊ธฐ๋ณธ์ ์ธ ARIMA ์™€ Prophet ๋ชจ๋ธ์„ ๊ธฐ์ค€์œผ๋กœ ๋‘๊ณ  ์ดํ›„ Pytorch framework์™€ Tensorflow framework๋ฅผ ๋ฐ”ํƒ•์œผ๋กœ Neural Netwokr ๋ชจ๋ธ๋“ค๋„ ์ˆ˜ํ–‰ํ•˜์˜€์Šต๋‹ˆ๋‹ค.

์ˆ˜ํ–‰๊ณผ์ •์„ ๊ฐ„๋‹จํžˆ ์„ค๋ช…๋“œ๋ฆฌ์ž๋ฉด, ๋”ฅ ๋‰ด๋Ÿด ๋„คํŠธ์›Œํฌ ๋ชจ๋ธ๋“ค(LSTM, Conv1d, Seq2Seq)์˜ ๋Šฅ๋ ฅ์ด ๊ธฐ๋ณธ ๋ชจ๋ธ์ธ ARIMA์™€ Prophet ๋ชจ๋ธ๋“ค์— ๋น„ํ•ด ํƒ์›”ํ•˜์ง€ ์•Š์•˜์Šต๋‹ˆ๋‹ค. ๊ทธ๋ž˜์„œ ์ €ํฌ๋Š” ๊ฐ€๊ฒฉ ๋ฐ์ดํ„ฐ ๋ณ€๋™ํญ์ด ๋„ˆ๋ฌด ์ปค์„œ ์ผ๋ฐ˜์ ์ธ forecasting modeling์„ ๋ฐ”๋กœ ํ•  ์ˆ˜ ์—†๋‹ค๊ณ  ํŒ๋‹จํ•˜์—ฌ ์ถ”๊ฐ€์ ์ธ data handling์„ ํ•˜์˜€์Šต๋‹ˆ๋‹ค. (simple exponential smoothing๊ณผ moving average smoothing ๊ทธ๋ฆฌ๊ณ  data discretize ๋ฅผ ํ•ด๋ณด์•˜์Šต๋‹ˆ๋‹ค. ์ถ”๊ฐ€๋กœ Fractional differencing์„ ํ•ด๋ณด๊ธฐ๋„ ํ•˜์˜€์œผ๋‚˜ ARIMA ๋ชจ๋ธ์—์„œ๋งŒ ๋‹ค๋ค„๋ดค์Šต๋‹ˆ๋‹ค)

data handling ์ดํ›„์—๋Š” ๋ณด๋‹ค ๋‚˜์€ ๊ฒฐ๊ณผ๊ฐ€ ์žˆ์„ ์ˆ˜ ์žˆ๋‹ค๊ณ  ์ƒ๊ฐํ•˜์˜€์œผ๋‚˜ ์ฝ”์ธ ๋ฐ์ดํ„ฐ์˜ ๋ณ€๋™์„ ์„ค๋ช…ํ•˜๊ธฐ์—๋Š” ์—ญ๋ถ€์กฑ์ด์—ˆ์Šต๋‹ˆ๋‹ค. ๊ฒฐ๋ก ์ ์œผ๋กœ ์—ฌ๋Ÿฌ ์‹คํ—˜์„ ํ•ด๋ณด์•˜์œผ๋‚˜ ๋ถ€๋ถ„์ ์œผ๋กœ ๋ผ๋„ ์˜ˆ์ธก๊ฐ€๋Šฅํ•œ ๋ชจ๋ธ๋ง์„ ์ˆ˜ํ–‰ํ•˜์ง€๋Š” ๋ชปํ•˜์˜€์Šต๋‹ˆ๋‹ค.


ํ”„๋กœ์ ํŠธ ๋ฐœํ‘œ์˜์ƒ

youtube link : https://www.youtube.com/watch?v=-ZSlri43b5A


Chapter. 1 - EDA(Exploratory Data Analysis)

train_x_df EDA process

  • sample_id : one sequence sample; each sequence consists of 1,380 minutes of time-series data (example below)


Figure. ๋ฐ์ดํ„ฐ ์ƒ˜ํ”Œ ์˜ˆ์‹œ

Dataset description for one sample

  • X : 1,380 minutes (23 hours) of continuous data
  • Y : 120 minutes (2 hours) of continuous data
  • Given the 23-hour data stream, predict the following 2 hours
  • sample_id comprises 7,661 sets; each set is an independent dataset
  • coin_index has 10 types in total (index numbers 0 ~ 9)
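As a minimal sketch of the task setup (the shapes follow the description above; the array names are illustrative, not the competition's official ones):

import numpy as np

# assumed stacked arrays: (samples, minutes, features)
n_samples, x_len, y_len, n_features = 7661, 1380, 120, 10
train_x_array = np.zeros((n_samples, x_len, n_features))  # 23 h of inputs
train_y_array = np.zeros((n_samples, y_len, n_features))  # 2 h to predict

# one forecasting instance: given 1,380 past minutes, predict the next 120
x_series = train_x_array[0, :, 1]  # open column is index 1 in this repo's code
y_series = train_y_array[0, :, 1]
print(x_series.shape, y_series.shape)  # (1380,) (120,)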

์ฝ”์ธ๋ณ„ ์ƒ˜ํ”Œ ๊ฐœ์ˆ˜

  • ๊ฐ ์ฝ”์ธ๋ณ„๋กœ ์ƒ˜ํ”Œ ๊ฐœ์ˆ˜๋Š” ๋‹ค๋ฆ„
  • 9, 8๋ฒˆ์˜ ์ƒ˜ํ”Œ ์ˆ˜๊ฐ€ ๊ฐ€์žฅ ๋งŽ์Œ


Figure. Number of data samples per coin index

Investigating unknown data features

  • 'Volume' - 'Taker buy base asset volume' = 'Maker buy base asset volume'

Source : https://www.binance.kr/apidocs/#individual-symbol-mini-ticker-stream

  • quote asset volume = coin volume / btc volume

quote asset volume = Volume expressed in quote asset units. For pair DOGE/ BTC the volume is shown in BTC , instead of DOGE.

Example: for a base/quote pair, quote asset volume is the trade amount expressed in the quote currency. Converting through KRW for intuition (assume a 1,000,000 KRW trade):
ex) for BTC/USDT, with USDT worth 1,200 KRW, qav = 1,000,000 / 1,200 ≈ 833 USDT
for BTC/KRW, with BTC worth 74,000,000 KRW, qav = 1,000,000 KRW

tb_base_av; volume / quote_av ratio per coin index:
0 = 19.xxxxx
1 = 0.028xxxxx
2 = 0.268xxxxx
3 = 0.238xxxxx
4 = 2.1312xxxx
5 = 52.1123xxxx (maximum coin)
6 = 0.22421
7 = 19.3821
8 = 0.003426
9 = 0.00013 (minimum coin)
====> the smaller the ratio, presumably the more expensive the coin
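A hedged sketch of that inference (the column names volume and quote_av are assumed from the memo code further below): since quote asset volume ≈ volume × price, the ratio volume / quote_av works as an inverse average-price proxy per coin.

import pandas as pd

# train_x_df is the competition dataframe; column names assumed as in this repo
grp = train_x_df.groupby('coin_index')
ratio = grp['volume'].sum() / grp['quote_av'].sum()  # small ratio -> high price
print(ratio.sort_values())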

Open price outlier problem

  • Outliers within a sample are too infrequent, making them hard to learn by regression (raw, smoothed, and log-smoothed data show no meaningful difference)


Figure. open price distribution plot


Figure. price box plot

  • open price outlier detection temporary method code
import matplotlib.pyplot as plt

# visually inspect each outlier open-price series
for temp_arr in outlier_arr:
    plt.plot(temp_arr, label = 'True series')
    plt.ylim(open_arr.min(), open_arr.max())
    plt.legend()
    plt.show()

# drop the outlier samples from the label dataframe
filtered_y_df = raw_y_df[~raw_y_df["sample_id"].isin(outlier_list)]


Figure. outlier range boxplot

EDA code

Coin EDA code link : here

Data handling memo

  1. Add a greedy feature based on taker volume data
''' greedy feature handling '''
# test_df = train_x_df[train_x_df['volume'] != 0]
# test_df['rest_asset'] = test_df['volume'] - test_df['tb_base_av']   # maker volume
# test_df['greedy'] = test_df['tb_base_av'] / test_df['volume']       # taker share

# test_df2 = test_df[['time', 'coin_index', 'open', 'high', 'low', 'close', 'volume', 'trades', 'tb_base_av','rest_asset', 'greedy']]
# test_df2[['coin_index','trades', 'volume', 'tb_base_av','rest_asset', 'greedy']].head()
# test_df2[test_df2['greedy'] == 1][['coin_index','trades', 'volume', 'tb_base_av','rest_asset', 'greedy']].head()
  1. ๋ณ€๋™ํญ feature add based on high and low price difference
print(
    f'''
    {df.high.max()}
    {df.low.max()}
    {df.open.max()}
    {df.close.max()}
    
    
    {df.high.min()}
    {df.low.min()}
    {df.open.min()}
    {df.close.min()}
    
    '''
    
    ''' high - low = ๋ณ€๋™ํญ \n'''
    ''' ์Œ๋ด‰์–‘๋ด‰ ๊ตฌ๋ถ„ ์ถ”๊ฐ€ ๊ฐ€๋Šฅ'''
)
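A minimal sketch of those two memo features, assuming df is the per-minute OHLC dataframe used above:

# price range per minute, plus a candle-direction flag
df['price_range'] = df['high'] - df['low']               # high - low = range
df['is_bull'] = (df['close'] > df['open']).astype(int)   # 1 = bullish, 0 = bearish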

Chapter. 2 - Season 1 model pilot

  • sample_id = 0; modeling with the open data series only

ARIMA modeling

from statsmodels.tsa.arima_model import ARIMA  # statsmodels <= 0.12 API (supports typ='levels')

# ARIMA model fitting : the model arguments were chosen arbitrarily
model = ARIMA(x_series, order=(3,0,1))
fit  = model.fit()
pred_by_arima = fit.predict(1381, 1380+120, typ='levels')

Prophet modeling

from prophet import Prophet  # packaged as fbprophet in older releases

# Prophet model fitting
prophet = Prophet(seasonality_mode='multiplicative', 
                  yearly_seasonality=False,
                  weekly_seasonality=False, daily_seasonality=True,
                  changepoint_prior_scale=0.06)
prophet.fit(x_df)

future_data = prophet.make_future_dataframe(periods=120, freq='min')
forecast_data = prophet.predict(future_data)
Result plot


Figure. season 1 model pilot

Season 1 pilot code

Season 1 pilot code link : here


Chapter. 3 - Personal modeling prediction

  • As in the earlier approach, predict yhat using only the open column from train_x.

ARIMA trial

  • Use the existing ARIMA method as the baseline
  • The hyperparameters p, d, q were chosen arbitrarily

  • ARIMA python code
from statsmodels.tsa.arima_model import ARIMA
from sklearn.metrics import mean_squared_error

def train(x_series, y_series, args):
    
    model = ARIMA(x_series, order=(2,0,2))
    fit  = model.fit()
    
    # forecast the 120 minutes following the 1,380-minute input window
    y_pred = fit.predict(1381, 1380+120, typ='levels')
    error = mean_squared_error(y_series, y_pred)
    plotting(y_series, y_pred, args.sample_id)  # user-defined plotting helper

    return error*10E5  # scale the MSE up for readability
Result


Figure. open price ARIMA prediction plot

Colab link : https://colab.research.google.com/drive/1x28Mi9MSqqkSTO2a8UU0wXDzgXNy2WT9?usp=sharing

Prophet trial

  • Hyperparameters were set arbitrarily; for seasonality we judged multiplicative a better fit for the coin data than additive

  • prophet python code
prophet = Prophet(seasonality_mode='multiplicative',
                  yearly_seasonality='auto',
                  weekly_seasonality='auto', daily_seasonality='auto',
                  changepoint_range=0.9,
                  changepoint_prior_scale=0.1  # tuned to avoid over-/underfitting
                 )

prophet.add_seasonality(name='first_seasonality', period=1/12, fourier_order=7)   # add a seasonality
prophet.add_seasonality(name='second_seasonality', period=1/8, fourier_order=15)  # add a seasonality

prophet.fit(x_df)

future_data = prophet.make_future_dataframe(periods=120, freq='min')
forecast_data = prophet.predict(future_data)

Prediction result for the sample_id = 1 dataset


Figure. open price prophet prediction plot


Colab link : https://colab.research.google.com/drive/1dDf6AIln31catWWDsrB_lbL-0M5DsZTd?usp=sharing

Neural Prophet trial

  • Hyperparameters were chosen arbitrarily; the seasonality mode is multiplicative, as in the previous Prophet model

  • neural prophet python code
import datetime

import numpy as np
import pandas as pd
from neuralprophet import NeuralProphet
from sklearn.metrics import mean_squared_error

def prophet_preprocessor(x_series):
    
    # start time initialization (arbitrary anchor date)
    start_time = '2021-01-01 00:00:00'
    start_dt = datetime.datetime.strptime(start_time, '%Y-%m-%d %H:%M:%S')

    # build the dataframe
    x_df = pd.DataFrame()
    # minute-by-minute timestamp series
    x_df['ds'] = [start_dt + datetime.timedelta(minutes = time_min) for time_min in np.arange(1, x_series.shape[0]+1).tolist()]
    # price series
    x_df['y'] = x_series.tolist()

    return x_df


def train(x_series, y_series, **paras):
    
    x_df = prophet_preprocessor(x_series)
    
    model = NeuralProphet(
                          n_changepoints = paras['n_changepoints'],
                          changepoints_range = paras['changepoints_range'],
                          num_hidden_layers = paras['num_hidden_layers'],
            
                          learning_rate = 0.1, epochs = 40, batch_size = 32,
                          seasonality_mode = 'multiplicative', 
                          yearly_seasonality = False, weekly_seasonality = False, daily_seasonality = False,
                          normalize='minmax'
                         )
    
    model.add_seasonality(name='first_seasonality', period=1/24, fourier_order=5) 
    model.add_seasonality(name='second_seasonality', period=1/12, fourier_order=10)

    metrics = model.fit(x_df, freq="min")

    future = model.make_future_dataframe(x_df, periods=120)
    forecast = model.predict(future)
    error = mean_squared_error(y_series, forecast.yhat1.values[-120:])

    return error
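A hedged usage sketch (the paras values here are illustrative; the keys match the train() signature above):

paras = {'n_changepoints': 30, 'changepoints_range': 0.9, 'num_hidden_layers': 2}
mse = train(x_series, y_series, **paras)
print(f'NeuralProphet MSE: {mse:.6f}')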

Colab link : https://colab.research.google.com/drive/1E38kkH2mfFgnGKj89t2mLZV6xg7rPQl8?usp=sharing

Fractional differencing ARIMA trial

  • In general, differencing makes a series stationary, but it also distorts the original data and loses information. Fractional (real-order) differencing was introduced to mitigate this.

  • Fractionally differenced time series (Korean reference) : https://m.blog.naver.com/chunjein/222072460703


  • fractional differencing ARIMA code
import numpy as np

# weight function for fractional differencing
def getWeights_FFD(d, size, thres):
    w = [1.]  # initial weight = 1

    for k in range(1, size):

        w_ = -w[-1] * (d - k + 1) / k  # recursive weight formula from the reference

        if abs(w[-1]) >= thres and abs(w_) <= thres:
            break
        else:
            w.append(w_)

    # w์˜ inverse
    w = np.array(w[::-1]).reshape(-1, 1)
    return w


def fracDiff_FFD(series, d, thres=0.002):
    '''
    Constant width window (new solution)

    Note 1: thres determines the cut-off weight for the window
    Note 2: d can be any positive fractional, not necessarily bounded [0,1]
    '''

    # 1) Compute weights for the longest series
    w = getWeights_FFD(d, series.shape[0], thres)

    width = len(w) - 1

    # 2) Apply weights to values
    df = []
    seriesF = series

    for iloc in range(len(w), seriesF.shape[0]):
        k = np.dot(w.T[::-1], seriesF[iloc - len(w):iloc])
        df.append(k)

    df = np.array(df)
    return df, w

# fractional differencing example
x_series = train_x_array[idx,:,data_col_idx]

# fractional differencing
fdiff, fdiff_weight = fracDiff_FFD(x_series, d=0.2, thres=0.002)
differencing_x_series = fdiff.reshape(fdiff.shape[0],)

# ARIMA modeling
model = ARIMA(differencing_x_series, order =(2,0,2))
fitted_model = model.fit()
pred_y_series = fitted_model.predict(1,120, typ='levels')

# scale control : fractional differencing preserves most of the temporal structure,
# but it changes the scale of the data, so rescale the forecast to start at 1
# (matching the convention that the price at minute 1380 equals 1)
first_value = pred_y_series[0]
scale_controler = 1 / first_value
scaled_pred_y_series = scale_controler * pred_y_series

Colab link : https://colab.research.google.com/drive/19hrQP6nI-KgVwWu9Udp2fbntYCjpnHG9?usp=sharing

Keras RNN models trial

  • In a similar way, use not only the open price data but the other features as well
  • Tried the Keras LSTM and GRU modules
keras LSTM code
# model training
class CustomHistory(keras.callbacks.Callback):
    def init(self):  # custom initializer, called manually before fit()
        self.train_loss = []
        self.val_loss = []
        
    def on_epoch_end(self, batch, logs={}):
        self.train_loss.append(logs.get('loss'))
        self.val_loss.append(logs.get('val_loss'))


def train(x_train, y_train, n_epoch, n_batch, x_val, y_val):

    #๋ชจ๋ธ
    model = Sequential()
    model.add(LSTM(128, return_sequences=True, input_shape= (x_train.shape[1],x_train.shape[2] )))
    model.add(LSTM(64, return_sequences=False))
    model.add(Dense(25, activation='relu'))
    model.add(Dense(1))

    # ๋ชจ๋ธ ํ•™์Šต๊ณผ์ • ์„ค์ •ํ•˜๊ธฐ
    model.compile(loss='mean_squared_error', optimizer='adam')

    # ๋ชจ๋ธ ํ•™์Šต์‹œํ‚ค๊ธฐ
    custom_hist = CustomHistory()
    custom_hist.init()

    #๋ชจ๋ธ ๋Œ๋ ค๋ณด๊ธฐ
    model.fit(x_train, y_train, epochs=n_epoch, batch_size=n_batch, shuffle=True, callbacks=[custom_hist], validation_data=(x_val, y_val), verbose=1)

    return model
Result


Figure. Keras LSTM prediction plot


Colab link : https://colab.research.google.com/drive/1oCCXpJSlLXDs6x968eYrIPQtzEo0klMq?usp=sharing

keras GRU code
# also try a GRU
model = keras.models.Sequential(
    [
     keras.layers.Bidirectional(layers.GRU(units = 50, return_sequences =True), input_shape=(x_frames, 1)), 
     keras.layers.GRU(units = 50),
     keras.layers.Dense(1)
    ]
)

model.compile(optimizer='adam', loss='mse')
model.summary()
Result


Figure. Keras GRU prediction plot


Colab link : https://colab.research.google.com/drive/1w2GZXVXSjRX-tlI49WAcC77szQaK_H6R?usp=sharing


Chapter. 4 - Data preprocess

Data smoothing

  • We then attempted DNN-family modeling, but regression failed outright. -> We judged that the raw data fluctuates too much for a model to regress on

  • smoothing method 1 : simple exponential smoothing

Exponential smoothing is a time series forecasting method for univariate data that can be extended to support data with a systematic trend or seasonal component. It is a powerful forecasting method that may be used as an alternative to the popular Box-Jenkins ARIMA family of methods.

  • smoothing method 2 : moving average

Smoothing is a technique applied to time series to remove the fine-grained variation between time steps. The hope of smoothing is to remove noise and better expose the signal of the underlying causal processes. Moving averages are a simple and common type of smoothing used in time series analysis and time series forecasting.

  • smoothing python code
import numpy as np
from statsmodels.tsa.holtwinters import SimpleExpSmoothing

def simple_exponential_smoothing(arr, alpha=0.3):
    
    y_series = list()
    
    for temp_arr in arr:
        target_series = temp_arr[:, 1].reshape(-1) # open col is 1 index

        smoother = SimpleExpSmoothing(target_series, initialization_method="heuristic").fit(smoothing_level=alpha, optimized=False)
        smoothing_series = smoother.fittedvalues
        
        y_series.append(smoothing_series)
    
    return np.array(y_series)


def moving_average(arr, window_size = 10):
    
    # window length for the moving average
    length = window_size
    ma = np.zeros((arr.shape[0], arr.shape[1] - length, arr.shape[2]))

    for idx in range(arr.shape[0]):
        for i in range(length, arr.shape[1]):
            for col in range(arr.shape[2]):
                ma[idx, i-length, col] = arr[idx,i-length:i, col].mean() # trailing mean per column
            
    return ma[:, :, 1] # open col is 1
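A hedged usage sketch, assuming train_x_array is stacked as (n_samples, 1380, n_features) with the open column at index 1, as elsewhere in this repo:

smoothed_open = simple_exponential_smoothing(train_x_array, alpha=0.3)  # (n_samples, 1380)
ma_open = moving_average(train_x_array, window_size=10)                 # (n_samples, 1370)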
Smoothing result

Figure. price data smoothing plot

Data discretize

  • The open y data contains too many large-amplitude outliers, so instead of predicting the true y we switched to learning only the pattern shape of the y values
  • discretize method : KBinsDiscretizer (scikit-learn)

  • KBinsDiscretizer python code
from sklearn.preprocessing import KBinsDiscretizer

kb = KBinsDiscretizer(n_bins=10, strategy='uniform', encode='ordinal')
kb.fit(open_y_series.reshape(-1, 1))  # KBinsDiscretizer expects a 2-D array
# the stored bin boundaries can be inspected via the `bin_edges_` attribute
print("bin edges :\n", kb.bin_edges_ )
Discretize result


Figure. KBinsDiscretizer before & after plot

Data log normalization

  • ๋ฐ์ดํ„ฐ ์ธํ’‹ ์‹œ open data ์ด์™ธ์— ๋‹ค๋ฅธ feature์„ ๊ฐ™์ด ํ™œ์šฉํ•˜๊ธฐ ์œ„ํ•ด, ๋‹ค์Œ๊ณผ ๊ฐ™์€ ๋ฐฉ๋ฒ•์œผ๋กœ normalization์„ ์ทจํ•ด์คŒ. ์ผ๋ฐ˜์ ์ธ scikit-learn normalizizer์€ ๋ฐ”๋กœ ์‚ฌ์šฉํ•˜๊ธฐ์—๋Š” ๋Œ€ํšŒ ๋‚ด์—์„œ 1380๋ถ„์ผ ๋•Œ์˜ open price๋ฅผ 1๋กœ ์ˆ˜์ •ํ•˜๋ฉด์„œ ์ „๋ฐ˜์ ์ธ ์ „์ฒ˜๋ฆฌ๊ฐ€ ์ด๋ฏธ ํ•œ๋ฒˆ ๋œ ์ƒํƒœ์ด๊ธฐ ๋•Œ๋ฌธ์— ํ•ด๋‹น ๋ฐฉ๋ฒ•์„ ์‚ฌ์šฉํ•จ.

  • log normalizer python code

data = data.apply(lambda x: np.log(x+1) - np.log(x[self.x_frames-1]+1))
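A self-contained restatement of that one-liner (the function name is illustrative; x_frames is the input length, 1380, and the anchor is the last input minute):

import numpy as np

def log_normalize(series, anchor_idx):
    # log1p-normalize relative to the value at anchor_idx,
    # so the anchored point maps to 0 (i.e. a price ratio of 1)
    return np.log1p(series) - np.log1p(series[anchor_idx])

x_frames = 1380
normed = log_normalize(x_series, x_frames - 1)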

Chapter. 5 - Pytorch modeling

Pytorch LSTM trial

Only coin 9, smoothing & LSTM
  • condition
    • Only coin 9 data use
    • Data preprocess - simple exponential smoothing
    • LSTM layer is 1

  • pytorch LSTM python code
class LSTM(nn.Module):
    
    def __init__(self, input_dim, hidden_dim, output_dim, num_layers, dropout, use_bn):
        super(LSTM, self).__init__()
        self.input_dim = input_dim 
        self.hidden_dim = hidden_dim
        self.output_dim = output_dim
        self.num_layers = num_layers

        self.dropout = dropout
        self.use_bn = use_bn 
        self.lstm = nn.LSTM(self.input_dim, self.hidden_dim, self.num_layers)

        self.regressor = self.make_regressor()
        
    def init_hidden(self, batch_size):
        return (torch.zeros(self.num_layers, batch_size, self.hidden_dim),
                torch.zeros(self.num_layers, batch_size, self.hidden_dim))
    
    def make_regressor(self):
        layers = []
        if self.use_bn:
            layers.append(nn.BatchNorm1d(self.hidden_dim))
        layers.append(nn.Dropout(self.dropout))
        
        layers.append(nn.Linear(self.hidden_dim, self.hidden_dim))
        layers.append(nn.ReLU())
        layers.append(nn.Linear(self.hidden_dim, self.output_dim))
        regressor = nn.Sequential(*layers)
        return regressor
    
    def forward(self, X):
        lstm_out, self.hidden = self.lstm(X)
        y_pred = self.regressor(lstm_out[-1].view(X.shape[1], -1))
        return y_pred
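A hedged shape check for the class above: with batch_first left at its default, nn.LSTM expects (seq_len, batch, input_dim), which is why forward() indexes lstm_out[-1]; the regressor then emits all 120 targets at once (the hyperparameter values here are illustrative):

import torch

model = LSTM(input_dim=1, hidden_dim=64, output_dim=120,
             num_layers=1, dropout=0.2, use_bn=True)
x = torch.randn(1380, 16, 1)   # 1,380 minutes, batch of 16, open price only
y_pred = model(x)
print(y_pred.shape)            # torch.Size([16, 120])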
  • ๋ชจ๋ธ ํ•™์Šต ๋ฐฉ๋ฒ• ์‹œ๊ฐํ™”


Figure. Multistep LSTM modeling (source : TensorFlow tutorial)


Result

When the model outputs all 120 y values at once, it emits the same y values regardless of the input pattern, as in the figure below. -> failure


Figure. coin9 LSTM prediction


Colab link : https://colab.research.google.com/drive/1I0Arck8qkV4FTXnOOYMxkpZGIRKCGj7J?usp=sharing

Only coin 9, Slicing & LSTM

Next, we sliced the data within each sample and reshaped the task into predicting the following 120 time steps from the previous 120, but this also failed.

  • one sample data slicing python code
class WindowGenerator():
    ''' Dataset Generate'''
    def __init__(self, input_width, label_width, stride, data_arr, column_indices = column_indices,
                 shift = None, label_columns=None):
    
        # Store the raw data
        self.data_arr = data_arr
        # Work out the label column indices.
        self.label_columns = label_columns
        if label_columns is not None:
            self.label_columns_indices = {name: i for i, name in enumerate(label_columns)}
        self.column_indices = column_indices
                
        # Work out the window parameters.
        self.input_width = input_width
        self.label_width = label_width
        self.shift = 1
        if shift is not None:
            self.shift = shift
        self.stride = stride
        
        self.label_start = self.input_width + self.shift
        self.total_window_size = self.label_start + self.label_width
        
        # input, label indices
        self.input_slice = slice(0, self.input_width)
        self.input_indices = np.arange(self.total_window_size)[self.input_slice]
        
        self.labels_slice = slice(self.label_start, None)
        self.label_indices = np.arange(self.total_window_size)[self.labels_slice]
        
        self.X_arr, self.y_arr = self.split_windows()
        
    def __repr__(self):
        return '\n'.join([
            f'Total window size: {self.total_window_size}',
            f'Input indices: {self.input_indices}',
            f'Label indices: {self.label_indices}',
            f'Label column name(s): {self.label_columns}'
        ])

    def split_windows(self):

        X, y = list(), list()
        sample_length = int(self.data_arr.shape[0])
        split_length = int((self.data_arr.shape[1] - self.total_window_size)/self.stride) + 1
        
        for temp_id in range(sample_length):
            for i in range(split_length):
                
                X.append(self.data_arr[temp_id, (i*self.stride) : (i*self.stride)+self.input_width])
                y.append(self.data_arr[temp_id, (i*self.stride)+self.label_start : (i*self.stride)+self.total_window_size])

        return np.array(X), np.array(y)

    def __len__(self):
        return len(self.X_arr)

    def __getitem__(self, idx):
        
        X = self.X_arr[idx, :, :]
        y = self.y_arr[idx, :, :]

        return X, y
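A hedged usage sketch, assuming data_arr is stacked as (n_samples, 1380, n_features) and column_indices maps feature names to array columns:

column_indices = {'open': 1}
wg = WindowGenerator(input_width=120, label_width=120, stride=60,
                     data_arr=train_x_array, column_indices=column_indices,
                     shift=1, label_columns=['open'])
print(wg)     # window layout summary from __repr__
X, y = wg[0]  # one sliced (input, label) window pair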

Colab link : https://colab.research.google.com/drive/11s1KCtT8NPvsaOR-1mYaR66lneQ1yxU7?usp=sharing

All coin, Log norm & LSTM

Next, we extended the same approach to all coins.

  • condition
    1. all coin data use
    2. No data preprocess
    3. log normalization
    4. LSTM layer is 1

Result

We kept the previous model architecture and only swapped the dataset from coin 9 to every coin. -> We expected price movements to split into a handful of common patterns regardless of the coin, rather than per-coin patterns. -> Training failed outright; the coin prices themselves have no seasonality at all, so we judged LSTM regression to be meaningless here.


Figure. LSTM prediction with all coin data


Colab link : https://colab.research.google.com/drive/1blDNKqxy6GvTkR-rq8pjn9eUL0IUpShi?usp=sharing

All coin, outlier remove & LSTM

Next, judging that the all-coin regression failed because of samples that deviate wildly from any seasonal pattern, we excluded samples whose y-series min-max range exceeds a threshold (the outlier criteria) and retrained.

  • outlier remove python code
import numpy as np

def outlier_detector(raw_y_arr, outlier_criteria = 0.03):

    open_arr = raw_y_arr[:, :, 1] # open col is 1

    outlier_list = []
    openrange_list = []

    for idx, temp_arr in enumerate(open_arr):
    
        temp_min = temp_arr.min()
        temp_max = temp_arr.max()
        temp_arr_range = temp_max - temp_min
        openrange_list.append(temp_arr_range)

        if temp_arr_range > outlier_criteria:
            outlier_list.append(idx)
            print(f'open series {idx} is an outlier sample!')
            print(f'temp array range is {temp_arr_range:.3}\n')
            

    return outlier_list, np.array(openrange_list)


Figure. outlier remove & LSTM prediction


All coin, kbinsdiscretize & LSTM

Instead of removing the outliers, we discretized the target y series, hoping the model could learn a handful of patterns and predict from them, but this failed. -> If we pursue this direction, it would be better to recast the task as a classification problem.

  • kbin_discretizer python code
def kbin_discretizer(input_array):

    kb = KBinsDiscretizer(n_bins=10, strategy='uniform', encode='ordinal')
    processed_data = np.zeros((input_array.shape[0], input_array.shape[1], 1))
    
    for i in range(input_array.shape[0]):
        # open col is 1; fit and transform each sample independently
        open_series = input_array[i,:,1].reshape(input_array.shape[1], 1)
        kb.fit(open_series)
        processed_data[i,:,:] = kb.transform(open_series)
        
    return processed_data
All coin, log norm & Conv1d-LSTM (model change)

We suspected the sequence was too long for a plain LSTM to learn (LSTM time sequence length = 1380), so we used Conv1d to split the series into local windows, extract features, and feed them into the LSTM, hoping this would make training feasible.

  • Conv1d-LSTM modeling code
class CNN_LSTM(nn.Module):
    
    def __init__(self, input_dim, hidden_dim, output_dim, n_layers):
        super(CNN_LSTM, self).__init__()
    
        self.input_dim = input_dim
        self.hidden_dim = hidden_dim
        self.output_dim = output_dim
        self.num_layers = n_layers

        self.conv1 = nn.Conv1d(args.input_dim, args.hidden_dim, kernel_size = 10)  # args is the global experiment config
        self.pooling1 = nn.MaxPool1d(2, stride = 5)
        self.conv2 = nn.Conv1d(args.hidden_dim, args.hidden_dim // 2, kernel_size = 5)
        self.pooling2 = nn.MaxPool1d(4, stride = 4)
        
        self.norm = nn.BatchNorm1d(32)
        
        self.lstm = nn.LSTM(32, 128, self.num_layers, batch_first = True, bidirectional = True)
        self.linear = nn.Linear(256, args.output_dim)
        self.flatten = nn.Flatten()
        
    def init_hidden(self, batch_size):
        return (torch.zeros(self.num_layers, batch_size, self.hidden_dim),
                torch.zeros(self.num_layers, batch_size, self.hidden_dim))
    
    
    def forward(self, X):
        
        # input is ordered (batch, feature dimension, time steps)
        output = F.relu(self.conv1(X))
        output = self.pooling1(output)
        output = F.relu(self.conv2(output))
        output = self.pooling2(output)
        # output = self.flatten(output)

        # reshape the conv features to (batch, seq_len, hidden) for the LSTM,
        # e.g. torch.Size([16, 32, 135]) -> torch.Size([16, 135, 32])
        output, self.hidden = self.lstm(output.reshape(args.batch_size, -1, 32))
        y_pred = self.linear(output[:, -1, :])
        
        return y_pred

Figure. Conv1d-LSTM prediction


RNN modeling conclusions

  1. The problem is not normalization or smoothing; the data has no periodicity in the first place, so regressing per sample was the wrong direction.
  2. RNN-family models such as LSTM learn patterns; in multistep (rather than one-step) prediction they end up emitting nearly identical outputs.
  3. To make the model learn specific patterns, discretizing the target and approaching it as a classification problem is one option (to be examined in Season 3).
  4. To solve it as regression, we should train one-step-ahead on the open data series within a sample, as classical time-series forecasting models (ARIMA or Prophet) do, and loop the prediction for the target length (120 min) (to be examined in Season 3; see the sketch below).
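A minimal sketch of the one-step loop from item 4, assuming a hypothetical fitted one-step model exposing predict_next():

import numpy as np

def recursive_forecast(one_step_model, x_series, horizon=120):
    # predict one step at a time, feeding each prediction back in,
    # until the 120-minute target window is covered
    history = list(x_series)
    preds = []
    for _ in range(horizon):
        yhat = one_step_model.predict_next(np.asarray(history))  # hypothetical interface
        preds.append(yhat)
        history.append(yhat)
    return np.array(preds)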

Chapter. 6 - Experiments & Simulation

Experiment list

※ In case a link breaks due to a path change, here is the folder link as well. Colab notebooks folder link : https://drive.google.com/drive/folders/1UNQQqKb_b2bhm7vpyjj_WZbtko4LFuBY?usp=sharing

Simulation program

Coin investing simulator code : here


Season 3 direction

modeling - Data discretize & classification driving
  • Recast the problem as classifying the peak-price pattern
  • Label the peak open price within the y window
  • The model classifies the peak label from the 1,380-minute input pattern
  • Pytorch Conv1d + bidirectional LSTM

modeling - Open data series regression by one sample
  • As classical time-series forecasting models (ARIMA or Prophet) do, train one-step-ahead on the open data series within a sample, then loop for the target length (120 min)
  • Apply smoothing, fractional differencing, and log normalization
  • Re-apply the moving average
  • Remove outlier data samples that look impossible to classify
  • Pytorch Conv1d + bidirectional LSTM

Memo
  • Residual modeling

In time-series analysis it is common to build models that predict how the value changes at the next timestep instead of predicting the value itself. Likewise, in deep learning, "residual networks" (ResNets) are architectures in which each layer adds to the model's accumulated result. This exploits the fact that the change should be small.


  • Golden cross strategy

If a golden cross is predicted within the forecast window, buy at minute 1381 (signal 1), otherwise pass (signal 0); after buying, sell everything when a dead cross occurs. This could make for a stable strategy -> though whether it is actually feasible is questionable. A sketch follows below.

  • Simply Classification

Compress the time length of the peak-y label rather than using all 120 steps. Also, when solving it as classification, don't buy unconditionally: put a cap on the predicted probability and buy only the high-confidence cases.

  • Conv1d

A 1D CNN can afford larger filter sizes and larger window sizes; filter sizes of 7 or 9 are commonly chosen.

Reference

  1. Time-Series Forecasting: NeuralProphet vs AutoML : https://towardsdatascience.com/time-series-forecasting-neuralprophet-vs-automl-fa4dfb2c3a9e

  2. Techniques to Handle Very Long Sequences with LSTMs : https://machinelearningmastery.com/handle-long-sequences-long-short-term-memory-recurrent-neural-networks/

     "A reasonable limit of 250-500 time steps is often used in practice with large LSTM models."

  3. Neural Prophet baseline : https://dacon.io/codeshare/2492

  4. Forecast data preprocessing and linear interpolation : https://dacon.io/competitions/official/235720/codeshare/2499?page=1&dtype=recent

     Method: extract the trend, interpolate, and feed it back in.

  5. How ARIMA works : https://youngjunyi.github.io/analytics/2020/02/27/forecasting-in-marketing-arima.html

  6. facebook prophet : https://facebook.github.io/prophet/docs/quick_start.html#python-api

     On changepoint_range (setting it to 100% would likely overfit): "By default changepoints are only inferred for the first 80% of the time series in order to have plenty of runway for projecting the trend forward and to avoid overfitting fluctuations at the end of the time series. This default works in many situations but not all, and can be changed using the changepoint_range argument."

     On changepoint_prior_scale (roughly, how flexibly trend changes are reflected): "If the trend changes are being overfit (too much flexibility) or underfit (not enough flexibility), you can adjust the strength of the sparse prior using the input argument changepoint_prior_scale. By default, this parameter is set to 0.05."

  7. Cryptocurrency price prediction using LSTMs | TensorFlow for Hackers (Part III) : https://towardsdatascience.com/cryptocurrency-price-prediction-using-lstms-tensorflow-for-hackers-part-iii-264fcdbccd3f

  8. TensorFlow time-series forecasting tutorial : https://www.tensorflow.org/tutorials/structured_data/time_series?hl=ko

  9. Introduction to the Prophet forecasting package (Korean) : https://hyperconnect.github.io/2020/03/09/prophet-package.html

  10. Fourier order meaning in prophet : https://medium.com/analytics-vidhya/how-does-prophet-work-part-2-c47a6ceac511

      m.add_seasonality(name='first_seasonality', period=1/24, fourier_order=7) fits a seasonality for each hour (one day split into 24); m.add_seasonality(name='second_seasonality', period=1/6, fourier_order=15) fits a seasonality for every 4 hours (one day split into 6).

  11. [ML with Python] 4. Binning/discretization & interactions/polynomials (Korean) : https://jhryu1208.github.io/data/2021/01/11/ML_segmentation/

  12. A Simple LSTM-Based Time-Series Classifier : https://www.kaggle.com/purplejester/a-simple-lstm-based-time-series-classifier

  13. PyTorch RNN blog post (Korean, Tistory) : https://seducinghyeok.tistory.com/8

  14. [PyTorch] Deep Time Series Classification : https://www.kaggle.com/purplejester/pytorch-deep-time-series-classification/notebook

  15. Introduction to Deep Learning with PyTorch (Korean, wikidocs) : https://wikidocs.net/64703

  16. scikit-learn KBinsDiscretizer docs : https://scikit-learn.org/stable/modules/generated/sklearn.preprocessing.KBinsDiscretizer.html

  17. Implementing a CNN in PyTorch (Korean, Tistory) : https://justkode.kr/deep-learning/pytorch-cnn

  18. Stock price direction prediction with a CNN (Korean) : https://direction-f.tistory.com/19

  19. Bitcoin Time Series Prediction with LSTM : https://www.kaggle.com/jphoon/bitcoin-time-series-prediction-with-lstm

  20. Season 1 CNN model team : https://dacon.io/competitions/official/235740/codeshare/2486?page=1&dtype=recent

  21. A Gentle Introduction to Exponential Smoothing for Time Series Forecasting in Python : https://machinelearningmastery.com/exponential-smoothing-for-time-series-forecasting-in-python/

  22. statsmodels exponential smoothing docs : https://www.statsmodels.org/stable/examples/notebooks/generated/exponential_smoothing.html

About

💲 DACON's AI bitcoin trader competition season 2
