I am writing a program in VHDL and I am stuck on type conversion. I tried a Google search and also here at Stack Exchange, but I am quite confused, since the answers contradict one another and I cannot get any of them to work. So, finally, to my question:
I have to implement an ordinary digital clock. My professor gave me a decoder for a 7-segment display that takes
std_logic_vector(3 downto 0) as its input, and my part is to provide the data (more specifically, the set of digits) to display. Most of the work is already done, but I am struggling to convert my variables, e.g. minutes (a
std_logic_vector(4 downto 0)), into two decimal digits. I have done the separation this way:
if (min >= 50) then
    d3 <= 5; d4 <= min - 50;
elsif (min >= 40) then
    d3 <= 4; d4 <= min - 40;
elsif ...
end if;
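(As a side note, the same split could be written without the if-chain, using the division and modulo operators that numeric_std defines for unsigned. This is only a sketch, assuming min is the 5-bit std_logic_vector and that constant division by 10 is acceptable for the target:)

```vhdl
-- tens and units of a 0..59 counter, via numeric_std "/" and "mod";
-- resize trims the 5-bit results down to the 4 bits the decoder expects
d3 <= std_logic_vector(resize(unsigned(min) / 10, 4));   -- tens digit
d4 <= std_logic_vector(resize(unsigned(min) mod 10, 4)); -- units digit
```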
(The maximum value of
min is 59; when it gets that high, it resets to 0 and the hours counter is incremented, just like an ordinary clock.) Thus
d3 should display the tens of minutes and
d4 the units of minutes. As you have probably guessed already, since
min is 5 bits long, this does not compile (
d4 accepts a length of 4 at most). My idea is therefore to convert
min to an integer, subtract a number (e.g. 50), and convert back to a
std_logic_vector of the right length (the actual value would be in the range 0 to 9, so four bits are no problem). I tried to perform something like this:
d4 <= std_logic_vector(unsigned(integer(unsigned(min)) - 50));
but without success. I always ended up with errors like "unknown function", "type mismatch", or "no matching overload for method", no matter which combination I tried. Apparently I have made some trivial error somewhere, but I fail to see it.
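(For reference, a version of that line that does compile with numeric_std alone, assuming min is still a std_logic_vector(4 downto 0) and the assignment sits inside the min >= 50 branch so the result is non-negative, would be:)

```vhdl
-- convert to integer, subtract, convert back to a 4-bit vector
d4 <= std_logic_vector(to_unsigned(to_integer(unsigned(min)) - 50, 4));
```

The original attempt fails because integer(...) is not a conversion from unsigned; numeric_std provides to_integer and to_unsigned for that purpose instead.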
I use these libraries:
library ieee;
use ieee.numeric_std.all;
use ieee.std_logic_1164.all;
use ieee.std_logic_arith.all;
use ieee.std_logic_unsigned.all;
Can you please point me in the right direction? I am running out of ideas.
Thank you very much for your time,
The input, output, and variable declarations, as requested, are:
port (sw2, sw1, sw0: in std_logic;  -- input switches
      clock: in std_logic;
      reset: in std_logic;
      d0, d1, d2, d3, d4, d5, d6, d7: out std_logic_vector(3 downto 0);  -- outputs for eight units of 7-segment display decoders
      dp0, dp1, dp2, dp3, dp4, dp5, dp6, dp7: out std_logic);  -- outputs for decimal points

attribute loc : string;
attribute loc of sw0   : signal is "P7";
attribute loc of sw1   : signal is "P9";
attribute loc of sw2   : signal is "P10";
attribute loc of clock : signal is "P2";
attribute loc of reset : signal is "P11";
end main_1048;

architecture main_1048arch of main_1048 is
begin
    process(sw2, sw1, sw0, clock, reset)
        variable yrs: std_logic_vector(11 downto 0);
        variable min, sec: std_logic_vector(4 downto 0);
        variable hrs, day, mth: std_logic_vector(3 downto 0);
    begin
After deleting the
std_logic_unsigned library, as suggested by Brian Drummond, and redefining the variables so they are now
naturals, the new error was No matching overload for "-". According to the specification I found, "-" should work:
function "-" (L: NATURAL; R: UNSIGNED) return UNSIGNED;
-- Result subtype: UNSIGNED(R'LENGTH-1 downto 0).
-- Result: Subtracts an UNSIGNED vector, R, from a non-negative INTEGER, L.
Thus the final edit was
d4 <= std_logic_vector(min - to_unsigned(50, 4)); (with the declaration
variable min: natural range 0 to 59;), and this (as far as the typecast is concerned) works.
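(With min declared as a natural, the whole digit split can also be done without the subtraction chain. A sketch along the same lines, using only numeric_std:)

```vhdl
-- min : natural range 0 to 59
d3 <= std_logic_vector(to_unsigned(min / 10, 4));   -- tens of minutes
d4 <= std_logic_vector(to_unsigned(min mod 10, 4)); -- units of minutes
```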
Thank you all for your help!