Well, a good candidate for testing would be some method that is sufficiently complex that it might have logic bugs.
(If your data model doesn't have ANY complex methods, maybe you should think about whether there is some non-UI code in your UI classes that could be moved over into your data model.)
You might have a TerrainMap.resize() method that changes the size of the terrain map, and you might want to test that the resized map still has the same elevation data as the old one (in the tile locations that weren't cropped by the resizing operation).
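As a rough sketch only (the constructor and the getElevation/setElevation method names here are guesses -- swap in whatever your TerrainMap class actually provides), a resize test might look something like:

```java
import org.junit.jupiter.api.Test;
import static org.junit.jupiter.api.Assertions.assertEquals;

public class TerrainMapResizeTest {
    @Test
    public void resizePreservesSurvivingElevations() {
        // Hypothetical API: adjust names/signatures to match your own TerrainMap class.
        TerrainMap map = new TerrainMap(4, 4);
        map.setElevation(1, 2, 7);
        map.setElevation(3, 3, 5);   // this tile will be cropped away by the resize

        map.resize(3, 3);            // shrink the map

        // A tile that survived the crop should keep its old elevation.
        assertEquals(7, map.getElevation(1, 2));
    }
}
```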
Some teams are handling whether or not tiles are currently "selected" in their data model. If that's you, you might be able to test the process of selecting multiple tiles (or a rectangle of tiles?) and then check whether the correct tiles are selected. Or maybe you have methods that can perform changes ("set elevation") on all selected tiles, and you could test whether the resulting grid was modified appropriately.
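For instance (again, selectRectangle/isSelected/setElevationOnSelected are made-up names standing in for whatever your selection API looks like):

```java
import org.junit.jupiter.api.Test;
import static org.junit.jupiter.api.Assertions.*;

public class SelectionTest {
    @Test
    public void rectangleSelectionAndBulkEdit() {
        // Hypothetical API: substitute your own Grid/selection method names.
        Grid grid = new Grid(5, 5);
        grid.selectRectangle(1, 1, 2, 2);         // select a 2x2 block of tiles

        assertTrue(grid.isSelected(1, 1));
        assertTrue(grid.isSelected(2, 2));
        assertFalse(grid.isSelected(0, 0));       // outside the rectangle

        grid.setElevationOnSelected(9);           // bulk edit only the selected tiles
        assertEquals(9, grid.getElevation(2, 1));
        assertEquals(0, grid.getElevation(4, 4)); // unselected tile is untouched
    }
}
```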
This is pretty simple logic, but still might be worth testing -- maybe your Tile or Grid class is enforcing that the elevation never goes below 0 or above some maximum elevation. Your test code could try "raising" or "lowering" the elevation a bunch of times, and make sure that it stays within the valid range.
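A sketch of that kind of test (assuming your Tile class has raise()/lower() methods and some maximum-elevation constant -- rename to match your actual code):

```java
import org.junit.jupiter.api.Test;
import static org.junit.jupiter.api.Assertions.assertTrue;

public class ElevationClampTest {
    @Test
    public void elevationStaysInValidRange() {
        // Hypothetical API: Tile with raise()/lower() and a MAX_ELEVATION constant.
        Tile tile = new Tile();

        for (int i = 0; i < 1000; i++) {
            tile.raise();
        }
        assertTrue(tile.getElevation() <= Tile.MAX_ELEVATION);

        for (int i = 0; i < 1000; i++) {
            tile.lower();
        }
        assertTrue(tile.getElevation() >= 0);
    }
}
```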
You could test saving/loading a map. Create a small map using code and modify the elevation of some tiles (and maybe other things, like whether the tile is "pointy"?). Then call your save-to-JSON method to save it to a temporary file somewhere. Then load the file into a new grid object, and test whether the new grid object has the same data as the original.
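Here's the general shape of a round-trip test. JUnit 5's @TempDir gives you a temporary directory that gets cleaned up automatically; the saveToJson/loadFromJson/isPointy names are placeholders for your own methods:

```java
import org.junit.jupiter.api.Test;
import org.junit.jupiter.api.io.TempDir;
import java.nio.file.Path;
import static org.junit.jupiter.api.Assertions.*;

public class SaveLoadRoundTripTest {
    @TempDir
    Path tempDir;   // JUnit 5 creates and deletes this temporary directory for you

    @Test
    public void savedMapLoadsBackIdentically() {
        // Hypothetical API: saveToJson/loadFromJson and a "pointy" flag per tile.
        Grid original = new Grid(3, 3);
        original.setElevation(0, 1, 4);
        original.setPointy(2, 2, true);

        Path file = tempDir.resolve("map.json");
        original.saveToJson(file.toString());

        Grid loaded = Grid.loadFromJson(file.toString());
        assertEquals(4, loaded.getElevation(0, 1));
        assertTrue(loaded.isPointy(2, 2));
    }
}
```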
You could test your OBJ file exporting code by, again, exporting a simple programmatically constructed map, and then checking whether the resulting OBJ file contains the correct/expected text data.
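Something along these lines, where exportObj() stands in for your exporter and the assertions only check for the generic "v "/"f " lines that any Wavefront OBJ file should contain (you could assert much more specific strings once you know your exporter's exact output format):

```java
import org.junit.jupiter.api.Test;
import org.junit.jupiter.api.io.TempDir;
import java.nio.file.Files;
import java.nio.file.Path;
import static org.junit.jupiter.api.Assertions.assertTrue;

public class ObjExportTest {
    @TempDir
    Path tempDir;

    @Test
    public void exportedObjContainsVerticesAndFaces() throws Exception {
        // Hypothetical API: exportObj(path) writes a standard Wavefront OBJ text file.
        Grid grid = new Grid(2, 2);
        grid.setElevation(0, 0, 3);

        Path objFile = tempDir.resolve("map.obj");
        grid.exportObj(objFile.toString());

        String contents = Files.readString(objFile);
        // The exact vertex format depends on your exporter; at minimum the file
        // should contain vertex ("v ") and face ("f ") lines.
        assertTrue(contents.contains("v "));
        assertTrue(contents.contains("f "));
    }
}
```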
If your GameBoard class has a toString() method for debugging purposes, you could even test that to make sure that it's working properly.
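For example, if your toString() happened to print one row of elevations per line (which is purely an assumption here -- match the expected string to whatever your format actually is):

```java
import org.junit.jupiter.api.Test;
import static org.junit.jupiter.api.Assertions.assertEquals;

public class GameBoardToStringTest {
    @Test
    public void toStringShowsElevations() {
        // Hypothetical: a 2x2 board that prints one row of elevations per line.
        GameBoard board = new GameBoard(2, 2);
        board.setElevation(0, 0, 1);
        board.setElevation(1, 1, 2);

        assertEquals("1 0\n0 2\n", board.toString());
    }
}
```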
It's hard to test random terrain generators, because the results are different each time. However, sometimes you can still come up with tests that are useful. For example, if your terrain generator is supposed to generate a smooth terrain, you could test that two adjacent tiles never differ in height by more than X units. Or, in theory, you could test that the mazes your maze-making algorithm generates are solvable from start to finish (but that would be a hard algorithm to write, and probably not worth the effort in this case).
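A smoothness check might look like this (TerrainGenerator.generate() and the MAX_STEP limit are both invented for the example -- use your own generator and whatever smoothness bound it's supposed to guarantee):

```java
import org.junit.jupiter.api.Test;
import static org.junit.jupiter.api.Assertions.assertTrue;

public class TerrainSmoothnessTest {
    private static final int MAX_STEP = 2;  // assumed maximum height difference between neighbors

    @Test
    public void adjacentTilesNeverDifferTooMuch() {
        // Hypothetical API: a generator that returns a randomly generated Grid.
        Grid grid = TerrainGenerator.generate(10, 10);

        // Compare every tile with its right and bottom neighbors.
        for (int x = 0; x < 10; x++) {
            for (int y = 0; y < 10; y++) {
                if (x + 1 < 10) {
                    assertTrue(Math.abs(grid.getElevation(x, y) - grid.getElevation(x + 1, y)) <= MAX_STEP);
                }
                if (y + 1 < 10) {
                    assertTrue(Math.abs(grid.getElevation(x, y) - grid.getElevation(x, y + 1)) <= MAX_STEP);
                }
            }
        }
    }
}
```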
To be clear, you do NOT need to create ALL of these tests I'm suggesting -- each team project only needs to have a few unit tests that make sense for your project, demonstrating that you know how to incorporate some automated testing.