1. Create the set of lineups to be compared.
2. For each lineup, simulate the outcomes of N games using that lineup.
3. The lineup that returns the best results (say, the highest winning %) is the lineup to go with.
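In code, the outer loop might look something like this minimal Python sketch, where simulate_game is a hypothetical stand-in for the per-game Markov chain simulation described in step 2 below:

```python
# Minimal sketch of steps 1-3. simulate_game is a hypothetical function
# implementing the per-game simulation from step 2; it should return
# True when the simulated game is a win for our team.
def best_lineup(candidate_lineups, simulate_game, n_games=10_000):
    win_pct = {}
    for lineup in candidate_lineups:          # step 1: the set of lineups
        wins = sum(simulate_game(lineup)      # step 2: simulate N games
                   for _ in range(n_games))
        win_pct[tuple(lineup)] = wins / n_games
    # Step 3: go with the lineup that returns the highest winning %.
    return max(win_pct, key=win_pct.get)
```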
It's in step 2, the simulation of a game, where the Markov chains come into play. Prior to each pitch in a game of baseball, the game is in a discrete, well-defined state. In any of these states, there is often a wealth of information you can use to estimate the probability of what will happen on the next pitch. For example:
2a. At the beginning of the game, you can look at the history of what happens when a particular pitcher throws to a particular batter in a 0-0 count with no outs and no one on base in the first inning***. Based upon that history, you estimate the probability of each possible outcome (ball, strike, hit, out, etc.) and simulate the result of the first pitch.
2b. After the simulated first pitch (say it was a strike, 0-1), the game is again in a well-defined state. Based upon the history of what happens in that state, you can estimate the probability of each possible outcome (1-1, 0-2, hit, out, etc.) and simulate the result of the second pitch.
2c. Keep doing this until you have simulated the entire game. The simulated game is a realization of a Markov chain defined by the lineup in play and the set of estimated probabilities that describe how the game transitions from one state (6th inning, score tied, runner on first, no outs, 1-1 count) to the next.
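To make the Markov step concrete, here is a toy but runnable Python sketch of a single plate appearance. The state is just the (balls, strikes) count, and the outcome probabilities are placeholder numbers standing in for the historical, state-specific estimates described above (a real model would estimate them separately for every state and handle details like foul balls with two strikes):

```python
import random

# Placeholder per-pitch outcome probabilities. In the real model these
# are estimated separately for every state (pitcher, batter, count,
# outs, runners, inning, ...); one fixed distribution is used here only
# to keep the sketch short.
OUTCOMES = ["ball", "strike", "in_play_hit", "in_play_out"]
WEIGHTS = [0.36, 0.44, 0.07, 0.13]

def simulate_plate_appearance():
    balls, strikes = 0, 0
    while True:  # each loop iteration is one Markov transition
        outcome = random.choices(OUTCOMES, weights=WEIGHTS)[0]
        if outcome == "ball":
            balls += 1
            if balls == 4:
                return "walk"
        elif outcome == "strike":
            strikes += 1  # simplification: fouls with 2 strikes ignored
            if strikes == 3:
                return "strikeout"
        else:
            return outcome  # ball in play ends the plate appearance
```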
This methodology extends to other fields as a way to run simulation studies of complex events.
*** Ideally, you have a history of what happens when the particular batter faces the particular pitcher. When the two haven't faced one another often, or when a rookie has little individual history, you can supplement that matchup history with the batter's and pitcher's individual histories and with the overall league history.
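One simple way to do that supplementing is to shrink the sparse matchup rate toward a broader rate using pseudo-counts; the prior rate and the pseudo-count k below are illustrative assumptions, not fitted values:

```python
def shrunk_rate(matchup_successes, matchup_trials, prior_rate, k=50):
    """Shrink a sparse batter-vs-pitcher rate toward a broader prior
    (e.g. the batter's overall rate) using k pseudo-observations."""
    return (matchup_successes + k * prior_rate) / (matchup_trials + k)

# 1-for-4 against this pitcher, .270 overall: stays near .270.
shrunk_rate(1, 4, 0.270)    # ~0.269
# 30-for-80 against him: the matchup history now moves the estimate.
shrunk_rate(30, 80, 0.270)  # ~0.335
```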