Trouble with https proxy to heatermeter


 

Steve_M

TVWBB Guru
This used to work, so this might be related to an nginx upgrade, but I'll see if anyone has any advice.

I use nginx as an ssl/https proxy to anything on my network.

Using Chrome, I can view the main heatermeter page, but I'm unable to complete the login process. When I enter my password and click on Login, I'm redirected back to the login page, but I do get the stok ID in the URL, i.e.: https://bbq.converged.ca/luci/;stok=36feae122bd72dc14ece0416aaf64f93/admin/lm

Now for a twist: if I use Safari, I can fully log in and get to the config page. With Chrome, I'm redirected back to the "Authorization Required" login page.

Another odd thing is that if I switch this config to be just port 80/http, it works fine, so something is getting fubared when trying to log in over https.

My nginx config for my heatermeter:

Code:
server {
        listen                  443 ssl http2;
        server_name             bbq.converged.ca;
        ssl_certificate         /etc/nginx/letsencrypt/certs/converged/fullchain.pem;
        ssl_certificate_key     /etc/nginx/letsencrypt/certs/converged/privkey.pem;

        location / {
                proxy_pass              http://192.168.1.178;
                proxy_http_version      1.1;
                proxy_buffering         off;
        }
}
 
I rolled back to the packaged version of nginx on Raspbian Jessie (v1.6.2-5) and I'm now able to access the config screen over https again. I had compiled and installed nginx v1.9.9 to gain HTTP/2 support.
 
I think I've stumbled upon something. The SPDY and HTTP/2 specs mandate that all header field names be converted to lowercase.

When HTTP/2 is enabled, nginx sends "cookie: sysauth=<string>; _ga=GA1.2.736033083.1454942811" to the luci web server. I think luci is looking for "Cookie:" and is ignoring "cookie:".

Well, this isn't the case, because it still fails if you don't have a cookie set. I think the issue is with the "Content-Type" header.

Looking at protocol.lua in the luci source, we see:

Code:
                                CONTENT_TYPE      = msg.headers['Content-Type'] or msg.headers['Content-type'];
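
To see why that lookup misses behind an HTTP/2 front end, here's a minimal sketch (my own illustration, not code from luci): Lua table keys are plain case-sensitive strings, so a header stored under its lowercase HTTP/2 name is never found by a mixed-case lookup.

Code:
-- Minimal sketch (my own illustration, not code from luci): Lua table
-- keys are case-sensitive, so a header that arrives as "content-type"
-- is not found by a lookup that uses the mixed-case spelling.
local headers = { ["content-type"] = "application/x-www-form-urlencoded" }
print(headers["Content-Type"])   -- nil: the mixed-case lookup misses
print(headers["content-type"])   -- application/x-www-form-urlencoded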
 
I was close, but now I've got it fixed.

It's /usr/lib/lua/luci/lucid/http/server.lua that needed to be fixed up. The nice thing is that this file is present on the filesystem and can be edited without having to fully recompile linkmeter.

In the "local hdr2env" section I added:

["content-type"] = "CONTENT_TYPE",

and

["cookie"] = "HTTP_COOKIE",

This now allows me to get to the config screen.

I'll add an all-lowercase entry for all of the items and create yet another luci patch file!

Code:
local hdr2env = {
        ["Content-Length"] = "CONTENT_LENGTH",
        ["content-length"] = "CONTENT_LENGTH",
        ["Content-Type"] = "CONTENT_TYPE",
        ["Content-type"] = "CONTENT_TYPE",
        ["content-type"] = "CONTENT_TYPE",
        ["Accept"] = "HTTP_ACCEPT",
        ["accept"] = "HTTP_ACCEPT",
        ["Accept-Charset"] = "HTTP_ACCEPT_CHARSET",
        ["accept-charset"] = "HTTP_ACCEPT_CHARSET",
        ["Accept-Encoding"] = "HTTP_ACCEPT_ENCODING",
        ["accept-encoding"] = "HTTP_ACCEPT_ENCODING",
        ["Accept-Language"] = "HTTP_ACCEPT_LANGUAGE",
        ["accept-language"] = "HTTP_ACCEPT_LANGUAGE",
        ["Connection"] = "HTTP_CONNECTION",
        ["connection"] = "HTTP_CONNECTION",
        ["Cookie"] = "HTTP_COOKIE",
        ["cookie"] = "HTTP_COOKIE",
        ["Host"] = "HTTP_HOST",
        ["host"] = "HTTP_HOST",
        ["Referer"] = "HTTP_REFERER",
        ["referer"] = "HTTP_REFERER",
        ["User-Agent"] = "HTTP_USER_AGENT",
        ["user-agent"] = "HTTP_USER_AGENT"
}
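
For what it's worth, a shorter alternative (just a sketch, not what I actually patched) would be to derive the lowercase entries from the existing table instead of listing each one by hand. The copy into a temporary table is needed because adding keys to a table while iterating it with pairs() is undefined in Lua.

Code:
-- Sketch of an alternative, not the patch above: derive the lowercase
-- entries from the existing hdr2env table instead of writing them out.
-- New keys go into a temporary table first, because adding keys while
-- iterating a table with pairs() is undefined behavior in Lua.
local lowered = {}
for name, env in pairs(hdr2env) do
        lowered[name:lower()] = env
end
for name, env in pairs(lowered) do
        hdr2env[name] = env
end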
 
Huh! Shows what I know; I always thought the headers were case-sensitive and that was why they were always camel-case like that. Good work, Steve!

Have you looked at luci trunk to see if it is fixed there? If not, you may want to see about getting it in there, so when we bump up to the latest openwrt version for v14 that is one less patch to maintain.
 
Looks like the lucid daemon has been deprecated and the main web server is now uhttpd, which does look to have been patched for lowercase header fields.

The HTTP/1.1 spec states that header field names are to be treated as case-insensitive. The HTTP/2 spec states that they must be converted to lowercase.
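
Put another way, the robust approach is to normalize header field names to lowercase once and do every lookup against the lowercase form, so HTTP/1.1 and HTTP/2 requests behave the same. A rough sketch of the idea (not uhttpd's actual code):

Code:
-- Rough sketch of the idea, not uhttpd's actual code: lowercase every
-- header name up front, then always look headers up by their lowercase
-- form so HTTP/1.1 and HTTP/2 requests behave the same.
local function normalize_headers(headers)
        local norm = {}
        for name, value in pairs(headers) do
                norm[name:lower()] = value
        end
        return norm
end

local msg = { headers = normalize_headers({ ["Content-Type"] = "text/html" }) }
print(msg.headers["content-type"])   -- text/html, regardless of original case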
 
I knew lucid was deprecated, but for some reason didn't make the connection that your changes wouldn't be needed because of it. It is one of the main reasons we can't just jump to the new version of openwrt!

Why can't uhttpd use the cert directly? What does nginx add to the equation? I've got limited Internet access for the next couple of weeks, so if you could fill me in I'd appreciate it :-D
 
I see nginx as offering the best of both worlds: it's high performance and requires very few resources. I was looking at the uhttpd configs for enabling ssl, and they're pretty messy compared to nginx. My thought process is to let uhttpd be an openwrt http server and use nginx as a reverse proxy for both http and https to uhttpd. There are also lots of documented configs for using letsencrypt with nginx. This will let even people using DDNS obtain a cert.

Also, by using nginx as the primary web server + reverse proxy, you could split the linkmeter display and config into different applications.

Something like:

Code:
server {
  location / {
    alias /var/www/linkmeter/;
    gzip_static on;
  }

  location /lm/config/ {
    proxy_pass http://localhost:8080;
    proxy_http_version 1.1;
  }
}
 

 
