In linear models (like linear regression, logistic regression), the magnitude of the coefficients can be an indicator of feature importance.
Say there is a simple regression model as shown below:
$$ Income = \alpha+\beta(EducationLevel)+\theta(Sex) $$
The coefficients $\beta$ and $\theta$ indicate the marginal change in the dependent variable for a unit change in the corresponding independent variable. Note also that independence among the predictors matters: strong correlation between variables degrades the interpretability of the coefficients.
For example, in the linear regression model above, if education level and sex were strongly correlated, the model could not cleanly separate their individual effects on income. The estimated coefficients would then no longer correctly reflect the importance of each feature.
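As an illustration, here is a minimal sketch that fits such a model on synthetic data (the intercept, coefficients, and variable ranges below are invented purely for demonstration) and reads feature importance off the fitted coefficients:

```python
import numpy as np
from sklearn.linear_model import LinearRegression

rng = np.random.default_rng(0)
n = 1_000
education = rng.integers(0, 20, size=n)   # years of education (assumed range)
sex = rng.integers(0, 2, size=n)          # binary indicator

# Synthetic income generated with alpha=20, beta=3, theta=5, plus noise
income = 20 + 3 * education + 5 * sex + rng.normal(0, 2, size=n)

X = np.column_stack([education, sex])
model = LinearRegression().fit(X, income)

print(model.intercept_)   # close to alpha = 20
print(model.coef_)        # close to [beta, theta] = [3, 5]
```

Because the two predictors are generated independently here, the fitted coefficients recover the true effects; with strongly correlated predictors the estimates would become unstable and harder to interpret.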
Two different approaches
The first approach, permutation importance, involves randomly shuffling a single column of the validation dataset while leaving the target and all other columns in place. If the model's accuracy decreases significantly on the shuffled data, the feature is important.
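The shuffling procedure can be sketched as follows (a minimal illustration on synthetic classification data; the choice of random forest and dataset parameters are assumptions for demonstration):

```python
import numpy as np
from sklearn.datasets import make_classification
from sklearn.ensemble import RandomForestClassifier
from sklearn.model_selection import train_test_split

# Synthetic data: the first 2 of 5 features are informative (shuffle=False
# keeps them in the first columns)
X, y = make_classification(n_samples=1_000, n_features=5, n_informative=2,
                           n_redundant=0, shuffle=False, random_state=0)
X_train, X_val, y_train, y_val = train_test_split(X, y, random_state=0)

model = RandomForestClassifier(random_state=0).fit(X_train, y_train)
baseline = model.score(X_val, y_val)   # accuracy on the intact validation set

rng = np.random.default_rng(0)
drops = []
for col in range(X_val.shape[1]):
    X_perm = X_val.copy()
    rng.shuffle(X_perm[:, col])        # shuffle only this one column
    drops.append(baseline - model.score(X_perm, y_val))
    print(f"feature {col}: accuracy drop {drops[-1]:.3f}")
```

Features whose shuffling causes a large accuracy drop are the important ones; shuffling an uninformative column should leave accuracy nearly unchanged. scikit-learn also provides a ready-made version of this procedure in `sklearn.inspection.permutation_importance`.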

The diagram above illustrates this process.
Pro: Intuitive