The Black Hole of Data Collection
I walked into a client's home to supervise one of our best RBTs. She had been with our company for about a year and was one of those techs you want to hold on to: always arrived 5 minutes early, never cancelled sessions, brought a big bag with her own materials, and kept meticulous data during her sessions. I was excited to see how our client was progressing with the new interventions and made a beeline for the data sheets when I got there.
I looked at the data sheet she was using for that day and was surprised to see a lot of things scratched out and some missing blocks in the interval recording section. I looked at the data sheet from the day before and saw that the interval data had been recorded, but the mand data was completely blank. I switched over to the food desensitization program we were running and saw that only one trial had been recorded in the past 3 days. What about the skill acquisition data? I looked and saw that only 50% of the trials had been run in the past week. The task analysis data sheet on the next page was completely blank. Something was very wrong. I looked up at the RBT, in the midst of a NET activity, and saw her frantically switching between two interval timers and 3 clickers. She looked back at me, and I swear I thought she was going to cry.
I felt terrible. I had done what I promised myself I would never do. I had fallen down the black hole of data collection and interventions. I did a quick tally in my head: we had a DRO running for pica (1), momentary time sampling for hand flapping (2), we were tracking independent mands (3) and spontaneous mands (4), and we were taking frequency data on biting (5). The DRO interval for pica varied: 2 minutes if we were outside, 5 minutes if we were inside. On top of that, we had 13 skill acquisition programs and a food desensitization program. There was an activity schedule and a task analysis for brushing teeth (oh yeah, and we still had one for putting on pants and ... another one for washing hands). What had I done?
As any experienced behavior analyst will tell you, data is the cornerstone of what we do. Without data, we're nothing. That being said, our data is only as good as the person collecting it. The same goes for interventions. Running too many systems simultaneously degrades both the quality of the data and the effectiveness of the interventions.
I scheduled a supervision with this RBT for the very next day and asked for her input on the interventions and data collection. What was working? What was too much? We simplified our data collection and cut out the task analysis for putting on pants, at least until he mastered one of the others. We set a goal for independent mands and decided not to tally spontaneous mands until he met that goal. When I came back for a supervision the following week, everything was running much more smoothly. He met our goal for independent mands, and we were able to begin tracking spontaneous mands exclusively. More importantly, all of the data was filled out, and the RBT looked much happier.
Critical BCBAs may say that these changes should have been made long before, and they'd be correct. They may jump down my throat about having 3 task analyses running simultaneously, or want to know why MTS data was being collected at all. Again, they'd be justified in their criticism.
I think being humble and recognizing when we've made a mistake is important in this field. Sharing these mistakes is also essential for the development of our field. I hope other BCBAs can learn from my mistake and implement simpler data collection systems on their own teams. Maybe this will be a reminder to step back and look at the programs you're supervising. Maybe you'll realize how perfect and simplified they are... or maybe you'll decide to cut back on a program or intervention.