abstract = "Conservation machine learning conserves models across
runs, users, and experiments and puts them to good use.
We have previously shown the merit of this idea through
a small-scale preliminary experiment, involving a
single dataset source, 10 datasets, and a single
so-called cultivation method—used to produce the
final ensemble. Focusing on classification tasks, we
perform extensive experimentation with conservation
random forests, involving 5 cultivation methods
(including lexigarden), 6 dataset sources, and 31
datasets. We show that significant improvement can be
attained by making use of models we already
possess, and envisage the possibility of
repositories of models (not merely datasets, solutions,
or code), which could be made available to everyone,
thus having conservation live up to its name,
furthering the cause of data and computational
science.",
notes = "HIBACHI, GAMETES
Institute for Biomedical Informatics, University of
Pennsylvania, Philadelphia, PA, 19104-6021, USA",