Smaller models (7B down to 350M) can handle long conversations better
What are you basing that on? It's true that more small models support very long context lengths than big ones, but that's not because smaller models handle long context better; it's because training big models takes far more resources. Long-context fine-tuning is usually done on small models, since extending a 70B to a 32K context would demand an enormous amount of compute and hardware.
If you could afford the fine-tuning, though, I'm pretty sure the big model has at least the same inherent capability. Larger models generally deal with ambiguity and long-range references better, so there's a good chance it would actually outperform the small model, all else being equal.
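To make the compute argument concrete, here's a rough back-of-envelope sketch of the per-sequence KV-cache memory at a 32K context, which grows linearly with both model size and sequence length. The model shapes (layer counts, hidden sizes) are illustrative stand-ins for "7B-class" and "70B-class" models, not any specific architecture, and this ignores activations, gradients, and optimizer state, which make the real gap even larger:

```python
# Back-of-envelope: per-sequence KV-cache memory at long context,
# assuming full multi-head attention and fp16 (2 bytes per value).
# Model shapes below are illustrative, not any specific checkpoint.
def kv_cache_bytes(n_layers, d_model, seq_len, bytes_per_val=2):
    # Keys + values: 2 tensors of shape (seq_len, d_model) per layer.
    return 2 * n_layers * d_model * seq_len * bytes_per_val

small = kv_cache_bytes(n_layers=32, d_model=4096, seq_len=32_768)  # ~7B-class
large = kv_cache_bytes(n_layers=80, d_model=8192, seq_len=32_768)  # ~70B-class

print(f"7B-class : {small / 2**30:.0f} GiB per sequence")   # 16 GiB
print(f"70B-class: {large / 2**30:.0f} GiB per sequence")   # 80 GiB
```

And that's just the cache for one sequence; attention FLOPs also scale quadratically with sequence length, so the 70B's larger hidden size multiplies an already-expensive term.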
They actually did at one point, but they threw it all away.