Part 3 done
Split the data into two CSV files: training and testing.
Binary-encoded `famhist`, and used z-score normalization for the numeric features.
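For reference, a minimal sketch of that preprocessing step (the sample rows and the `sbp` column name here are illustrative, not the actual data; the real pipeline presumably operates on the full CSVs):

```python
import statistics

# Hypothetical sample rows: `famhist` is categorical ("Present"/"Absent"),
# the rest are numeric features.
rows = [
    {"sbp": 160.0, "famhist": "Present"},
    {"sbp": 144.0, "famhist": "Absent"},
    {"sbp": 118.0, "famhist": "Present"},
]

# Binary-encode famhist: "Present" -> 1, "Absent" -> 0.
for r in rows:
    r["famhist"] = 1 if r["famhist"] == "Present" else 0

# Z-score normalize each numeric column: (x - mean) / std.
vals = [r["sbp"] for r in rows]
mean, std = statistics.mean(vals), statistics.pstdev(vals)
for r in rows:
    r["sbp"] = (r["sbp"] - mean) / std
```

After this, the z-scored column has mean 0, so the two CSVs can be written out with the same scaling statistics (fit on training only to avoid leakage).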
Then, for the model, added several measures to curb overfitting: leaky ReLU activations, L2 regularization, dropout, and early stopping, along with more training epochs.
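A plain-Python sketch of what these pieces do conceptually (in the actual model they would come from the framework's built-ins, e.g. a leaky-ReLU activation, an L2 weight penalty, and an early-stopping callback; the `alpha`, `lam`, and `patience` values below are assumed, not taken from the notes):

```python
def leaky_relu(x, alpha=0.01):
    # Leaky ReLU: small negative slope instead of zeroing negatives,
    # which avoids "dead" units.
    return x if x > 0 else alpha * x

def l2_penalty(weights, lam=0.01):
    # L2 regularization: penalty term added to the loss to shrink weights.
    return lam * sum(w * w for w in weights)

def early_stopping_epoch(val_losses, patience=3):
    # Stop once validation loss hasn't improved for `patience` epochs.
    best, wait = float("inf"), 0
    for epoch, loss in enumerate(val_losses):
        if loss < best:
            best, wait = loss, 0
        else:
            wait += 1
            if wait >= patience:
                return epoch
    return len(val_losses) - 1
```

Dropout is omitted from the sketch since it only makes sense inside the training loop; combined, these let the network train for more epochs without memorizing the training set.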
NOTE: The saved model achieved 76%+ accuracy on the test set (for the bonus mark).
Merging!