2021-11-30

Converting React files to Typescript

I am in the process of learning TypeScript and am converting my React project to TS. However, I've hit a bit of a roadblock and I'm not quite sure what to do with this file:

import { Grid } from '@material-ui/core';
import { useParams } from 'react-router-dom';
// @ts-ignore
import ProductImage from './ProductImage.tsx';
// @ts-ignore
import ProductInfo from './ProductInfo.tsx';

type Props = {
  products: {
    id: number;
    price: number;
    description: string;
    listing_type: string;
    image: string;
  }[];
  addToCart: (e: MouseEvent) => void;
  user: {
    id: number;
    isAuth: boolean;
  };
}

const Product: React.FC<Props> = ({ products, addToCart, user }) => {
  const { productId } = useParams<{productId: string}>()

  const product = products.find(product => product.id === parseInt(productId));

  return (
    <div>
      <Grid container spacing={1} style=>
        <Grid item sm={4}>
          <ProductImage image={product?.image} />
        </Grid>
        <Grid item sm={8}>
          <ProductInfo product={product} onClick={(e: React.MouseEvent<Element, globalThis.MouseEvent>) => addToCart(e)} user={user}/>
        </Grid>
      </Grid>
    </div>
  );
}

export default Product;

I added the @ts-ignore lines because of errors related to importing files ending with .tsx. However, now when I try to run npm start, I get the following error:

TypeError: products.find is not a function
Product
src/components/products/ProductShow.tsx:27
  24 | const Product: React.FC<Props> = ({ products, addToCart, user }) => {
  25 |   const { productId } = useParams<{productId: string}>()
  26 |   console.log(products)
> 27 |   const product = products.find(product => product.id === parseInt(productId));
  28 | 
  29 |   return (
  30 |     <div>

(I should note that products are passed down as an array of objects from a higher component)

At this point, I'm not sure if I'm defining my prop types incorrectly or if there is another issue with how I've set everything up. Any help with either configuring my project so that .tsx files can be imported or defining my prop types correctly would be greatly appreciated!
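Incidentally, the runtime error means that whatever the declared prop type promises, products is not an actual array when the component renders (for example an object keyed by id, or a value that is still undefined while loading). A defensive sketch, using a hypothetical toArray helper, to illustrate the idea:

```typescript
// Hypothetical helper: normalize whatever actually arrives in `products`
// to an array before calling .find on it.
function toArray<T>(value: T[] | Record<string, T> | null | undefined): T[] {
  if (Array.isArray(value)) return value;
  if (value == null) return [];
  const out: T[] = [];
  for (const key in value) out.push(value[key]);
  return out;
}

// Inside the component this would become:
// const product = toArray(products).find(p => p.id === parseInt(productId, 10));
```

The real fix is to make the parent component actually pass an array (or adjust the Props type to match what it really passes); the helper just makes the mismatch visible instead of fatal.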

Here is my tsconfig.json file

{
  "compilerOptions": {
    "target": "es2016",
    "module": "esnext",
    "noImplicitAny": true,
    "esModuleInterop": true,
    "removeComments": true,
    "preserveConstEnums": true,
    "sourceMap": true,
    "strict": true,
    "jsx": "react-jsx",
    "allowJs": true,
    "checkJs": false,
    "lib": [
      "dom",
      "dom.iterable",
      "esnext"
    ],
    "skipLibCheck": true,
    "allowSyntheticDefaultImports": true,
    "forceConsistentCasingInFileNames": true,
    "noFallthroughCasesInSwitch": true,
    "moduleResolution": "node",
    "resolveJsonModule": true,
    "isolatedModules": true,
    "noEmit": true
  },
  "include": [
    "src"
  ]
}


from Recent Questions - Stack Overflow https://ift.tt/3FWqKd9
https://ift.tt/eA8V8J

Guzzle Symfony scrape iframes inside multiple Servers

I am building a scraper to scrape content using Guzzle and the Symfony DomCrawler, but I have run into an issue.

The page I am scraping has multiple iframe servers. The default iframe is shown when the scraper loads the page, but to get the other servers it needs to click their buttons so that the corresponding server's iframe is shown.

How do I do that?



from Recent Questions - Stack Overflow https://ift.tt/3cXELLg
https://ift.tt/eA8V8J

Couchbase Update query divide

I am trying to update documents using an UPDATE query statement on Couchbase, e.g. UPDATE Users SET cityIndex = 1 WHERE Users.city = "NewYork";

There is so much data that I want to split the UPDATE into batches of 3,000 to 4,000 documents. How should I proceed? There is a PRIMARY INDEX.
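One commonly used batching pattern (a sketch using the question's field names; the extra condition marks off documents that are already updated, so the statement can simply be re-run) is an UPDATE with a LIMIT clause:

```sql
UPDATE Users
SET cityIndex = 1
WHERE Users.city = "NewYork"
  AND (cityIndex IS MISSING OR cityIndex != 1)  -- skip already-updated documents
LIMIT 3000;
```

Repeating this statement until it reports a mutationCount of 0 walks through the whole set in batches of 3,000.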



from Recent Questions - Stack Overflow https://ift.tt/3o18zgi
https://ift.tt/eA8V8J

React Router (Version 6.0.2) makes page unresponsive ReactJS

When I use the new navigate function from React Router DOM in ReactJS, it makes the page unresponsive. I think there is an error in my code, because when I attach navigate to a button, like () => navigate('home'), it works. But when I call it in useEffect it doesn't work and the page becomes unresponsive.

Please check my code if there are any errors because I do not know a lot about this.

import React, { useEffect, useState } from 'react';

import './App.css';

import {
  BrowserRouter as Router,
  Routes,
  Route,
  Link,
  Navigate,
  useNavigate
} from "react-router-dom";
import Home from './Pages/Home';
import Login from './Pages/Login';

import {
  getAuth,
  createUserWithEmailAndPassword,
  signInWithEmailAndPassword,
  onAuthStateChanged
} from "firebase/auth";

import {auth} from './Config';


function App() {

  const [authenticated, setAuthenticated] = useState(false);

  const navigate = useNavigate();

  useEffect(() => {
    onAuthStateChanged(auth, (user) => {
      if (user) {
        setAuthenticated(true);
        navigate('home') // doesn't work
      } else {
        setAuthenticated(false);
      }
    })
  })

  const signup = (email, pass) => {
    createUserWithEmailAndPassword(auth, email, pass)
      .then((userCredential) => {
        const user = userCredential.user;
        console.log(user)
        navigate('home')
      })
      .catch((error) => {
        const errorCode = error.code;
        const errorMessage = error.message;
        console.log(errorMessage)
      });
  }

  const login = (email, pass) => {
    signInWithEmailAndPassword(auth, email, pass)
      .then((credentials) => {
        console.log(credentials.user);
      })
      .catch((error) => {
        console.log(error.message);
      })
  }

  return (
    <Routes>
      <Route path='/' element={<button className="primary-button" onClick={() => navigate('home')}>Home</button>} /> // this works
      <Route path="/home" element={<Home authenticated={authenticated} />}></Route>
      <Route path="/auth" element={<Login signup={signup} login={login} />}></Route>
    </Routes>
  );
}

function AppWrapper() {
  return (
    <Router>
      <App />
    </Router>
  )
}

export default AppWrapper;
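A likely culprit in the code above: the useEffect has no dependency array, so it re-runs after every render and re-subscribes onAuthStateChanged each time, and navigate('home') inside the callback triggers yet another render. The usual shape is useEffect(() => { const unsubscribe = onAuthStateChanged(auth, ...); return unsubscribe; }, []). A plain-JavaScript sketch (with a hypothetical stand-in for onAuthStateChanged) of why the cleanup and dependency array matter:

```javascript
// Hypothetical stand-in: subscribe a callback and return an unsubscribe
// function, mimicking firebase's onAuthStateChanged contract.
function onAuthStateChangedMock(listeners, callback) {
  listeners.add(callback);
  return () => listeners.delete(callback);
}

// Simulate n renders. Without cleanup (no deps array, no returned unsubscribe)
// every render stacks another listener; with cleanup the count stays at 1.
function renderNTimes(n, withCleanup) {
  const listeners = new Set();
  let cleanup = null;
  for (let i = 0; i < n; i++) {
    if (withCleanup && cleanup) cleanup(); // React runs cleanup before re-running the effect
    cleanup = onAuthStateChangedMock(listeners, () => {});
  }
  return listeners.size;
}
```

With an empty dependency array the effect (and the subscription) runs once, and the returned unsubscribe keeps listeners from piling up across renders.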


from Recent Questions - Stack Overflow https://ift.tt/3paJWNF
https://ift.tt/eA8V8J

How do you wait for a pod's containers to start running?

I'm trying to wait for a pod's non-init container(s) to start running.

Things I have tried that do not work:

kubectl wait pod/mypod --for condition=initialized

This condition is met once the initContainers have started. This is too early. I want to wait until the initContainers have completed and the main containers have started.

kubectl wait pod/mypod --for condition=containersReady
kubectl wait pod/mypod --for condition=ready

Both of these are too late. I want the condition to be met before the pod's containers' startupProbes have completed.

I want to wait until the containers are running, but not have to wait until the startup/readiness probes are satisfied. How do I do this?



from Recent Questions - Stack Overflow https://ift.tt/3I4mzOs
https://ift.tt/eA8V8J

My browser lags when I try a loop function?

I wrote a simple nested loop function to multiply all items in an array and output the total value, but each time I run it my browser either crashes or never stops loading.

function multiplyAll(arr){
    Let product = 1;
    for(let i = 0; i < arr.length; i++){
        for(let j = 0; j < arr[i].length; j *= product);
    }
    return product;
}

multiplyAll([[1], [2], [3]]);
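For reference, the hang comes from the inner loop: j *= product never advances j (both start at their initial values, so the condition never becomes false), and Let with a capital L is not valid JavaScript. A corrected sketch:

```javascript
function multiplyAll(arr) {
  let product = 1; // lowercase `let`
  for (let i = 0; i < arr.length; i++) {
    for (let j = 0; j < arr[i].length; j++) { // increment j; multiply into product instead
      product *= arr[i][j];
    }
  }
  return product;
}
```

multiplyAll([[1], [2], [3]]) now returns 6.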


from Recent Questions - Stack Overflow https://ift.tt/3rkCh1L
https://ift.tt/eA8V8J

How do I call inside the layout activity_main(sw320dp)?

I want to load the (sw320dp) layout file

Layout:- activity_main(sw320dp)

public class MainActivity extends AppCompatActivity {

@Override
protected void onCreate (Bundle savedInstanceState) {
    super.onCreate(savedInstanceState);
    setContentView(R.layout.activity_main(sw320dp));
 }
}
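For what it's worth, smallest-width qualifiers such as sw320dp are never referenced from code; the system selects the matching file automatically at runtime, so the call stays setContentView(R.layout.activity_main) and the two layouts simply share a name:

```text
res/layout/activity_main.xml           <- default layout
res/layout-sw320dp/activity_main.xml   <- picked automatically when the device's
                                          smallest width is at least 320dp
```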




from Recent Questions - Stack Overflow https://ift.tt/3pbvbdw
https://ift.tt/3D7t3Im

Custom query to fetch all entries of a table and that only contains first of many duplicates based on a specific column

I have a Location model and the table looks like

id name vin ip_address created_at updated_at
0 default 0 0.0.0.0/0 2021-11-08 11:54:26.822623 2021-11-08 11:54:26.822623
1 admin 1 10.108.150.143 2021-11-08 11:54:26.82885 2021-11-08 11:54:26.82885
2 V122 122 10.108.150.122 2021-11-08 11:54:26.82885 2021-11-08 11:54:26.82885
3 V123 123 10.108.150.123 2021-11-08 11:54:26.82885 2021-11-08 11:54:26.82885
4 V124 124 10.108.150.124 2021-11-08 11:54:26.82885 2021-11-08 11:54:26.82885
5 V122 122 10.108.150.122 2021-11-08 11:54:26.82885 2021-11-08 11:54:26.82885
6 V125 122 10.108.150.125 2021-11-08 11:54:26.82885 2021-11-08 11:54:26.82885

My method in the Location model

   def self.find_all_non_duplicate
     return self.find(:all, :conditions => "id <> 1")
   end

I want to fetch all entries of the locations table except the entry with id = 1 and that contains only the first entry of many duplicates based on the column ip_address.

Since ip_address of id = 2 and id = 5 is duplicate. I want to keep the first entry of many duplicates i.e., id = 2.

The expected result is

id name vin ip_address created_at updated_at
0 default 0 0.0.0.0/0 2021-11-08 11:54:26.822623 2021-11-08 11:54:26.822623
2 V122 122 10.108.150.122 2021-11-08 11:54:26.82885 2021-11-08 11:54:26.82885
3 V123 123 10.108.150.123 2021-11-08 11:54:26.82885 2021-11-08 11:54:26.82885
4 V124 124 10.108.150.124 2021-11-08 11:54:26.82885 2021-11-08 11:54:26.82885
6 V125 122 10.108.150.125 2021-11-08 11:54:26.82885 2021-11-08 11:54:26.82885

The entries with ids 1 and 5 are to be ignored.
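The intended filtering can be sketched in plain Ruby (rows as hashes, purely to illustrate the logic; in ActiveRecord terms this would be roughly Location.where.not(id: 1) combined with a SELECT MIN(id) ... GROUP BY ip_address subquery, since find(:all, :conditions => ...) is the old Rails 2-era API):

```ruby
# Keep the first (lowest-id) row per ip_address, skipping the row with id 1.
def find_all_non_duplicate(rows)
  rows.sort_by { |row| row[:id] }
      .reject  { |row| row[:id] == 1 }
      .uniq    { |row| row[:ip_address] }
end
```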



from Recent Questions - Stack Overflow https://ift.tt/2ZxpOfF
https://ift.tt/eA8V8J

How to store API JSON response in MYSQL database

I'm using an API that provides the data in JSON format. I'm trying to store the JSON response in the MySQL database (just as it is)

and then re-fetch it from the database in JSON format. You may wonder why I'm doing this: I'm using a paid API that has a limited number of requests. To avoid multiple API calls, I want to serve API responses from my own server (so application users would poll my server for the JSON response instead of calling the API directly).

So I created a table named "matchinfo" and there is a column named "jsondata" which has a type of LONGTEXT

$json_response = file_get_contents("api_url"); // storing json format response

$update_data = "UPDATE matchinfo SET jsondata = '$json_response'"; // Successfully stored it

$update_query = mysqli_query($conn,$update_data); 


// how can I again fetch it in the JSON format 



from Recent Questions - Stack Overflow https://ift.tt/3rjnGUl
https://ift.tt/eA8V8J

How to create ranges inside a Select in a sql clause

I have a table that looks like this:

+---------+-------+------+------+----------+
|cd_cli   |vl_ren |max_t0|max_12|dt_mvtc   |
+---------+-------+------+------+----------+
|514208   |1040.00|0     |0     |2017-01-31|
|30230361 |3720.00|0     |0     |2017-01-31|
|201188220|2742.00|0     |0     |2017-01-31|
|204080612|2968.00|0     |0     |2017-01-31|
|209727665|860.00 |0     |0     |2017-01-31|
|212491854|792.00 |0     |0     |2017-01-31|
|300597652|1663.00|0     |0     |2017-01-31|
|300836378|2366.00|0     |0     |2017-01-31|
|301040450|3394.00|0     |0     |2017-01-31|
|302394154|2218.00|0     |0     |2017-01-31|
+---------+-------+------+------+----------+

And I want to select:

vlren = spark.sql('''select dt_mvtc,
                        vl_ren,
                        max_t0,
                        max_12,
                        count(cd_cli) as count_cd_cli
                 from table_xx
                 group by dt_mvtc,vl_ren,max_t0,max_12
                 order by dt_mvtc''')

But the group by does not work well because the values of vl_ren are sometimes very close to one another - they can differ by 0.01 - so I am trying to group them into ranges, but I am not sure how to put the ranges inside the select clause:

%%time
%%spark

vlren = spark.sql('''select dt_mvtc,
                            vl_ren,
                            max_t0,
                            max_12,
                            count(cd_cli) as count_cd_cli
                          CASE
                              WHEN vl_ren >= 0 AND vl_ren < 1000 THEN 0
                              WHEN vl_ren >= 1000 AND vl_ren < 2000 THEN 1
                              WHEN vl_ren >= 2000 AND vl_ren < 3000 THEN 2
                              WHEN vl_ren >= 3000 THEN 3
                           END AS values
                        FROM
                          vl_ren
                        ) AS vl_ren_range
                     GROUP BY dt_mvtc,vl_ren_range.values,max_12
                     from sbx_d4n0cbf.renda_presumida 
                     order by dt_mvtc''')

Is this the right way to get the expected output? Is there a better approach?
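A sketch of one possible shape, keeping the question's column names (Spark SQL accepts a select alias in GROUP BY when spark.sql.groupByAliases is enabled, which it is by default):

```sql
SELECT dt_mvtc,
       CASE
         WHEN vl_ren >= 0    AND vl_ren < 1000 THEN 0
         WHEN vl_ren >= 1000 AND vl_ren < 2000 THEN 1
         WHEN vl_ren >= 2000 AND vl_ren < 3000 THEN 2
         WHEN vl_ren >= 3000                   THEN 3
       END           AS vl_ren_range,
       max_t0,
       max_12,
       COUNT(cd_cli) AS count_cd_cli
FROM   sbx_d4n0cbf.renda_presumida
GROUP  BY dt_mvtc, vl_ren_range, max_t0, max_12
ORDER  BY dt_mvtc
```

Grouping on the bucketed vl_ren_range instead of the raw vl_ren is what collapses values that differ by only 0.01 into the same group.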



from Recent Questions - Stack Overflow https://ift.tt/32JtzQx
https://ift.tt/eA8V8J

How to convert float to int and concatenate with a string?

I have an array of years, por. The years are currently floats, like 1942.0. I want to remove the decimal and append "-12-31" so that I have an array with entries like "1942-12-31". I wrote the loop below, but when I run it the decimals remain and the first few entries of the array are unchanged. Where am I going wrong?

por=CMStations.por
for i in por:
    int(i)
    por.loc[i]=str(i)+"-12-31"


from Recent Questions - Stack Overflow https://ift.tt/2ZyDTJM
https://ift.tt/eA8V8J

Typescript: Create recursive mapped type that maps from existing type to string

Given a type, how can I write a recursive mapped type that yields a type with all the same keys, but with their types being strings instead of whatever their incoming type is? Specifically, I want to handle nested objects & arrays.

type MySourceType = {
  field1 :string,
  field2: number,
  field3: number[],
  field4: Date, 
  field5: {
    nestedField1: number,
    nestedField2: number[]
    nestedField3: Date,
  }
}
type MyDestinationType = MakeAllFieldsString<MySourceType>;

should yield:

type MyDestinationType = {
    field1 :string,
    field2: string,
    field3: string[],
    field4: string, 
    field5: {
      nestedField1: string,
      nestedField2: string[]
      nestedField3: string,
    }
  }

this works for a regular "flat" object but fails to handle the nested objects and arrays

type JsonObject<T> = {[Key in keyof T]: string; }

I also tried this, but it didn't seem to do what I expected either.

type NestedJsonObject<T> = {
[Key in keyof T]: typeof T[Key] extends object ? JsonObject<T[Key]> : string;
}
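One way to make the recursion work is to handle arrays and Date explicitly before the generic object case (a Date is itself an object, so the order of the conditional branches matters); a sketch:

```typescript
type MakeAllFieldsString<T> =
  T extends (infer U)[] ? MakeAllFieldsString<U>[] :
  T extends Date ? string :
  T extends object ? { [K in keyof T]: MakeAllFieldsString<T[K]> } :
  string;

type MySourceType = {
  field1: string;
  field2: number;
  field3: number[];
  field4: Date;
  field5: { nestedField1: number; nestedField2: number[]; nestedField3: Date };
};

// This assignment only compiles if the mapped type produced all-string fields:
const check: MakeAllFieldsString<MySourceType> = {
  field1: "a",
  field2: "1",
  field3: ["1", "2"],
  field4: "2021-11-30",
  field5: { nestedField1: "1", nestedField2: ["1"], nestedField3: "2021-11-30" },
};
```

Incidentally, in the NestedJsonObject attempt, typeof T[Key] should be just T[Key]; typeof applies to values, not type parameters.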


from Recent Questions - Stack Overflow https://ift.tt/3I7q1HR
https://ift.tt/eA8V8J

Node.js writing parse data to a variable

I'm writing a Node.js app to read messages from a serial port. Reading data and logging it to the console works fine, although I'm wondering how to save a data value from the serial port to a variable. I want to pass it on to MySQL, so the data needs to be stored in a variable. I tried using a global variable, but it keeps saying "undefined". I also tried returning the value from the JS function, but that doesn't work either. Here's my code:

var SerialPort = require('serialport');
const parsers = SerialPort.parsers;
const parser = new parsers.Readline({
    delimiter: '\r\n'
});

var port = new SerialPort('COM10',{ 
    baudRate: 9600,
    dataBits: 8,
    parity: 'none',
    stopBits: 1,
    flowControl: false
});

port.pipe(parser);

parser.on('data', function(data) {
    
    console.log('Received data from port: ' + data);
});

Please tell me how to store data from parser.on in a variable.
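Because the port delivers data asynchronously, a variable assigned inside the 'data' handler is only populated after the event fires; reading it earlier gives undefined. The usual pattern is to do the follow-up work (e.g. the MySQL insert) inside the handler itself, or to collect values as they arrive. A minimal sketch without a real serial port:

```javascript
// Minimal sketch, no real serial port: collect values as they arrive and do
// the dependent work (e.g. the MySQL insert) inside the handler itself.
function makeCollector(onEach) {
  const received = [];
  const handler = (data) => {
    received.push(data);
    if (onEach) onEach(data); // e.g. run the INSERT here (hypothetical callback)
  };
  return { handler, received };
}

// With the question's parser this would be wired up as:
// const collector = makeCollector((data) => insertIntoMysql(data)); // hypothetical insert fn
// parser.on('data', collector.handler);
```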



from Recent Questions - Stack Overflow https://ift.tt/3E7G6Ld
https://ift.tt/eA8V8J

How to use Async in Spring Boot?

Below is my code.

Different thread ids are not being created; the output shows the same thread id every time.

@Controller
@RequestMapping(value = "/Main")
public class MyController 
{
    @Autowired
    private MyService myService;
   
   @PostMapping("/Sub")
   @ResponseBody
   public String readInput(@RequestBody String name)
   {
       for (int i = 0;i<5;i++)
       {
           myService.asyncMethod();
       }
       return "Success";
   }
}

Here is my service:

@Repository
@Configuration
@EnableAsync
public class MyService {

    @Bean(name = "threadPoolTaskExecutor")
       public Executor threadPoolTaskExecutor() {
          return new ThreadPoolTaskExecutor();
       }
     
     @Async("threadPoolTaskExecutor")
     public void asyncMethod() {
        System.out.println("Thread " + Thread.currentThread().getId()+ " is running");
     }
}
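A plain-JDK sketch of what is probably happening: ThreadPoolTaskExecutor defaults to a core pool size of 1, so every @Async call is serialized onto the same thread; giving the executor more threads (e.g. executor.setCorePoolSize(5) on the bean before returning it) produces distinct thread ids. The same effect demonstrated with java.util.concurrent:

```java
import java.util.Collections;
import java.util.HashSet;
import java.util.Set;
import java.util.concurrent.CountDownLatch;
import java.util.concurrent.ExecutorService;
import java.util.concurrent.Executors;

class PoolDemo {
    // Run `tasks` short jobs on a fixed pool and report how many distinct threads ran them.
    static Set<Long> threadIds(int poolSize, int tasks) {
        ExecutorService pool = Executors.newFixedThreadPool(poolSize);
        Set<Long> ids = Collections.synchronizedSet(new HashSet<>());
        CountDownLatch latch = new CountDownLatch(tasks);
        for (int i = 0; i < tasks; i++) {
            pool.submit(() -> {
                ids.add(Thread.currentThread().getId());
                try { Thread.sleep(50); } catch (InterruptedException ignored) { }
                latch.countDown();
            });
        }
        try { latch.await(); } catch (InterruptedException e) { Thread.currentThread().interrupt(); }
        pool.shutdown();
        return ids;
    }
}
```

With a pool size of 1, every task reports the same thread id - exactly the symptom above.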


from Recent Questions - Stack Overflow https://ift.tt/3FSxSqM
https://ift.tt/eA8V8J

Scraping and writing the table into dataframe shows me TypeError

I am trying to scrape a table and write it into a dataframe, but it shows me a TypeError. How do I resolve this error?

from selenium.webdriver.support.ui import WebDriverWait
from selenium.webdriver.support.ui import WebDriverWait
from selenium.webdriver.common.by import By
from selenium.webdriver.support import expected_conditions as EC
from selenium.webdriver.support.select import Select
from selenium import webdriver
import pandas as pd
temp=[]
driver= webdriver.Chrome('C:\Program Files (x86)\chromedriver.exe')
driver.get("https://www.fami-qs.org/certified-companies-6-0.html")
WebDriverWait(driver, 20).until(EC.frame_to_be_available_and_switch_to_it((By.CSS_SELECTOR,"iframe[title='Inline Frame Example']")))
headers=WebDriverWait(driver, 20).until(EC.visibility_of_element_located((By.XPATH, "//table[@id='sites']//thead"))).text
rows=WebDriverWait(driver, 20).until(EC.visibility_of_element_located((By.XPATH, "//table[@id='sites']//tbody"))).text
temp.append(rows)
df = pd.DataFrame(temp,columns=headers)
print(df)

In headers I get the text FAMI-QS Number ... Expiry date, while in rows I get FAM-0694 ... 2022-09-04.



from Recent Questions - Stack Overflow https://ift.tt/3pd6dtX
https://ift.tt/eA8V8J

Google Apps Script - Web App - Console Status Update Text Messages

I have a working Google Apps Script that displays a single web app output at the end.

The script runs for about 20 seconds, and I'm looking to improve the user experience by intermittently updating the script status during this period.

I understand the challenges of asynchronous server/client operation, but I'm surprised that such "trivial" functionality seems quite hard to realize.

I reviewed similar topics, but they miss a good example.

Code Example :

function doGet(e){
  output = function1();
  output = function2();
  return HtmlService.createHtmlOutput(output);
}

function function1() {
  //DoSomething
  return "output1";
}

function function2() {
  //DoSomething
  return "output2";
}

I'm not looking to replace the function calls with HTML calls. But maybe I could continuously poll a global variable until script execution is completed, if that is feasible?

Any example code will be highly appreciated.

Best Regards, Kristof



from Recent Questions - Stack Overflow https://ift.tt/3xBYfP9
https://ift.tt/eA8V8J

Can not select MUI Autocomplete options

I am trying to use <Autocomplete/> for searching members by member name or their company name. For example, there are two members:

Name: "user 1"
company: ""
id:1
Name: "user 2"
company:"company 1"
id:2

Now if I search "user 2", user 2 should come up; if I search "company 1", user 2 should also come up.

Here is codesandbox.io for this https://codesandbox.io/s/distracted-bassi-tqm7e?file=/src/App.tsx:766-1850

I have this working, but the problem I am facing is in customizing the options UI.

I want to display the company name in a smaller font and grey color, and the member name in a regular font.

I have a styled option now, but I can't select the options.

Here is my code.

<Autocomplete
    id="free-solo-demo"
    freeSolo
    options={suggestion}
    onSelectCapture={(e) => { console.log(e) }}
    getOptionLabel={(option: any) =>
        `${option.name} ${option.companyName}`
    }
    selectOnFocus={true}
    renderOption={(option: any) => {
        console.log("option");
        console.log(option);
        return <h4>{`${option.key}`}</h4>;
    }}

    onChange={(event, value) => HandleUserProfile(value)}
    renderInput={(params) => (
        <TextField
            {...params}
            placeholder="Search..."
            onChange={handleChange}
        />
    )}
/>


from Recent Questions - Stack Overflow https://ift.tt/3rlRjEI
https://ift.tt/eA8V8J

How could I adjust the height of each azure service in Visual Studio Code?

VS code capture image

As shown in the capture image (see the red arrow), I want to adjust the height of each Azure service. Please let me know how to do this. Thanks.



from Recent Questions - Stack Overflow https://ift.tt/3rkaY82
https://ift.tt/eA8V8J

How to center items in a bootstrap column such that every item starts in the same position relative to the X axis?

To replicate my problem, I wrote a simple HTML page with 2 bootstrap columns. I would like my second column to have multiple <p> items, each one below the other. However, the issue arises when the strings contained in the <p> tags are of different lengths: I want them to be in the center of the column, but also to start at the same position relative to the X-axis.

<body>
<h2 class="text-center my-5">
    title
</h2>
<div class="container">
    <div class="row">
        <div class="col-md-6 ">
            Some unrelated text
        </div>

        <div class="col-md-6">
            <p>Small text</p>

            <p>A much longer text</p>

        </div>

    </div>

</div>

Essentially I would like the string "A much longer text" to be exactly below "Small text", so that it would look this way:

Small text
A much longer text

Instead of

   Small text
A much longer text

I tried styling the column with text-align:center as well as using flexbox and align-items:center but they both produce the same result.
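One way to get that alignment in Bootstrap is to center a wrapper block in the column and left-align the text inside it (a sketch; text-start is the Bootstrap 5 class, Bootstrap 4 uses text-left):

```html
<div class="col-md-6 text-center">
  <!-- the inline-block wrapper is centered as a whole;
       its contents stay left-aligned -->
  <div class="d-inline-block text-start">
    <p>Small text</p>
    <p>A much longer text</p>
  </div>
</div>
```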



from Recent Questions - Stack Overflow https://ift.tt/3nW61jB
https://ift.tt/eA8V8J

Android language change and settings are completely broken inside app downloaded from Play Store

I have a function that rewrites and picks the app language based on the system language when the app is installed. This function works as expected if you build the app from Android Studio (even a release build). I can also confirm it works from the Play Store on all Android versions except Android 10, 11 and 12.

According to my logs, the correct locale is picked and the resource config is rewritten, but after the activity restarts the app jumps to English as the default no matter what system language is currently set (even when the default locale in code is correct - in my case Czech, language code "cs").

As I said, it is caused by the Google Play version of the APK, not the Android Studio one.

Is there some undocumented change on Google Play's side, such that the resource config becomes effectively read-only for apps uploaded to the Play Store since Android 10?

Here is function:

fun applyLanguage() {
        val defaultLocale = startupLocale
        val langs = App.languages
        val langCode = app.languageIndex.let {
            if (it == 0) {
                if(langs.any{ l -> l.first==defaultLocale.language }) {
                    defaultLocale.language
                }else {
                    langs[App.LANGUAGE_INIT_DEFAULT].first
                }
            } else {
                langs[it - 1].first
            }
        }
        App.log("LangChange: MainActivity -> applyLanguage (langCodeSet) $langCode")
        app.sysLog("LangChange: MainActivity -> applyLanguage (langCodeSet) $langCode")
        if (resources.configuration.locale.language != langCode) {
            val l = if (langCode == defaultLocale.language) {
                defaultLocale
            } else
                Locale(langCode, "", "")
            arrayOf(resources, app.resources).forEach { res ->
                val cfg = res.configuration
                cfg.locale = l
                res.updateConfiguration(cfg, res.displayMetrics)
            }
            Locale.setDefault(l)
        }
        app.langCode = if (langs.any { it.first == langCode }) langCode else "cs"
        App.log("LangChange: MainActivity -> applyLanguage (langCode) ${app.langCode}")
        app.sysLog("LangChange: MainActivity -> applyLanguage (langCode) ${app.langCode}")
    }

It's a simple function. I have an array of available languages (in my case 3 of them, based on the available resource translations), and if the default system language is one of them, I set the app to that language; if not, I set Czech as the default. So if I pick English, the app should be in English; if I pick German, the app should be in German; if I pick Czech, the app should be in Czech; and if I pick any other language (for example French), it should fall back to Czech.

Also, the same function is used for the language picker in the app settings, with the same issue. The default locale has langCode "cs", but if I pick any of those languages from the picker, it always resets the resources to the default state (the base strings.xml), which is of course English.

Another example: I set the default language to French in the device settings. I downloaded the app from the store and it correctly rewrote the resources locale to Czech (language code "cs"). But the app was still in English.

So resources.configuration.locale.language was "cs" after the activity restart, but this resource config was completely ignored by the system, which picked the default resource file - strings.xml - which is English.

So it looks like you can't rewrite the resources config anymore; or technically you can, but the altered resource config is completely ignored by the system.

UPDATE

Additional debugging.

Android 10: Default language (French): App was installed and the default language was set to English (supposed to be Czech). If you change the language in settings, no matter what language you pick, it will always be set to English.

Android 11: Default language (French): App was installed and the default language was set to Czech (correct). If you change your language in settings it gets interesting: if you change to English, the app switches to English; if you change back to Czech, the app switches to Czech; if you change to German, the app switches to English (I don't know what's going on).

Android 12: Default language (French): App was installed and the default language was set to English (supposed to be Czech). If you change to English, the app switches to English. If you change back to Czech, the app switches to English. If you change to German, the app switches to German.

Android 9, 8, 7, 6 (and probably lower) - working as intended.

I'm not sure what's going on, but it's kinda funny.
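One frequently reported cause of exactly this "fine from Android Studio, broken only from Google Play" split is Android App Bundle language splitting: Play delivers only the resources for the languages configured on the device, so an in-app locale switch finds nothing to load for other languages. If that is what is happening here, the usual workaround is to disable language splits in the module's build.gradle (a sketch, assuming the Groovy DSL):

```groovy
android {
    bundle {
        language {
            // Ship every language's resources in each download so that
            // runtime locale switching can find them.
            enableSplit = false
        }
    }
}
```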



from Recent Questions - Stack Overflow https://ift.tt/3I42asQ
https://ift.tt/eA8V8J

2021-11-29

AttributeError: 'NoneType' object has no attribute 'lower'; password validation

When registering a User, the desired user password is checked against a list of disallowed passwords. Yet when the password is passed to a validator method, the following error is raised:

AttributeError: 'NoneType' object has no attribute 'lower'

Why does the validate() method act as if password is None when in fact it is truthy?


from django.contrib.auth.models import User
from django.contrib.auth.password_validation import (
    validate_password, CommonPasswordValidator, NumericPasswordValidator
)

from rest_framework import serializers


class LoginSerializer(UsernameSerializer):

    password = serializers.RegexField(
        r"[0-9A-Za-z]+", min_length=5, max_length=8
    )

    def validate(self, data):
        username = data['username']
        password = data['password']
        try:
            validate_password(password, password_validators=[
                CommonPasswordValidator, NumericPasswordValidator
            ])
        except serializers.ValidationError:
            if username == password:
                pass
            raise serializers.ValidationError("Invalid password")
        else:
            return data

    class Meta:
        fields = ['username', 'password']
        model = User
-> for validator in password_validators:
(Pdb) n
> \auth\password_validation.py(46)validate_password()
-> try:
(Pdb) n
> \auth\password_validation.py(47)validate_password()
-> validator.validate(password, user)
(Pdb) password
'Bingo'
(Pdb) user
(Pdb) password
'Bingo'
(Pdb) s
--Call--
> \auth\password_validation.py(180)validate()
-> def validate(self, password, user=None):
(Pdb) password
(Pdb) n
> \django\contrib\auth\password_validation.py(181)validate()
-> if password.lower().strip() in self.passwords:
(Pdb) n
AttributeError: 'NoneType' object has no attribute 'lower'
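One detail worth checking: the trace is consistent with passing the validator classes rather than instances - CommonPasswordValidator instead of CommonPasswordValidator(). Calling validate on the class shifts every argument left by one, so the password string becomes self and the password parameter defaults to None. A minimal illustration of the shift:

```python
# Toy validator mirroring Django's validate(self, password, user=None) signature.
class ToyValidator:
    def validate(self, password, user=None):
        return password

# Instance call: `password` receives the string, as intended.
assert ToyValidator().validate("Bingo") == "Bingo"

# Class call (as with [CommonPasswordValidator] instead of
# [CommonPasswordValidator()]): "Bingo" lands in `self`, so password is None.
assert ToyValidator.validate("Bingo", None) is None
```

If that is the cause, instantiating the validators - password_validators=[CommonPasswordValidator(), NumericPasswordValidator()] - would be the fix.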


from Recent Questions - Stack Overflow https://ift.tt/3cWVNJn
https://ift.tt/eA8V8J

On the allocation of static data in the MIPS architecture

When discussing the 32-bit MIPS architecture, Patterson--Hennessy explain that the static data segment starts at 0x 1000 0000, ends at 0x 1000 FFFF, with the global pointer $gp set by default to the middle address 0x 1000 8000. It is stated that the heap is next, and should thus start at 0x 1001 0000.

Some experimenting with MARS however tells me that there is an additional segment lying in between, which goes from 0x 1001 0000 to 0x 1003 FFFF, so that the heap only starts at 0x 1004 0000. Indeed when I store say an array on the heap using a syscall, this array will be stored in 0x 1004 0000 onwards.

This additional segment seems to get used when I initialise data under the .data header of my program. This confuses me, as I was under the expectation that data initialised under .data was to be considered static, and should therefore be stored in the segment governed by the global pointer.

Question. Is the behaviour exhibited by MARS standard? If yes, in what way does this additional data segment, lying between the static data and the heap, differ from the static data segment lying in front of it?



from Recent Questions - Stack Overflow https://ift.tt/3lcEHM4
https://ift.tt/eA8V8J

react JS import Files and use specific one based on Condition

I'm trying to show markers on a Google map based on lat and lng values from a JSON file.

I'm importing the files like that:

import * as crimeData from "../resources/newfile.json";
import * as aggAssault from "../resources/categories/AGG_ASSAULT.json";
import * as autoTheft from "../resources/categories/AUTO_THEFT.json";
import * as burglaryNonres from "../resources/categories/BURGLARY-NONRES.json";
import * as burglaryResidence from "../resources/categories/BURGLARY-RESIDENCE.json";
import * as homocide from "../resources/categories/HOMICIDE.json";
import * as larcenyFromVehicle from "../resources/categories/LARCENY-FROM_VEHICLE.json";
import * as larcenyNonVehicle from "../resources/categories/LARCENY-NON_VEHICLE.json";
import * as rape from "../resources/categories/RAPE.json";
import * as robberyCommercial from "../resources/categories/ROBBERY-COMMERCIAL.json";
import * as robberyPedestrian from "../resources/categories/ROBBERY-PEDESTRIAN.json";
import * as robberyResidence from "../resources/categories/ROBBERY-RESIDENCE.json";

each file is a category of crime.

When I use this, it works:

function Map() {
            return (
            <>
            <GoogleMap 
            defaultZoom={10}
            defaultCenter= 
            >
                {crimeData.crimes.map((crime) => (
                    <Marker key={Math.random()} position =  />
                )
                )}
            </GoogleMap>
            </>
            );
        }

        const WrappedMap = withScriptjs(withGoogleMap(Map));

but I need to change crimeData to one of the other files based on a condition.

i have tried this:

const array= [aggAssault, autoTheft, burglaryNonres, burglaryResidence, homocide, larcenyFromVehicle, larcenyNonVehicle, rape, robberyCommercial, robberyPedestrian, robberyResidence ]

and replaced crimeData with array[1], for example, but I get an error. The error is a type error; I guess it's not treating array[1] as an imported JSON file. Any idea on how to do this? Thank you.



from Recent Questions - Stack Overflow https://ift.tt/3D3GadL

How to update tables using SQL Alchemy ORM in python

Hi, I am trying to update multiple tables with the SQLAlchemy ORM in Python. I am new to the SQLAlchemy ORM and want to make sure I am headed in the right direction on how to properly query a table and update tables with it.

Here is what I am trying to accomplish according to this graph: cars

Here is my code

from cars.models import Cars
from cars.models import CarType
from cars.models import CarModel
from cars.models import CarItem, CarItemOther
from sqlalchemy import create_engine
from sqlalchemy.orm import session
from sqlalchemy.sql.expression import update
from sqlalchemy.sql.sqltypes import String, Text

# To update cars
def update_cars_models(session):
    db = session

    state = db.query(Cars).filter(Cars.state == "PRE_PROCESS").all()

    car_update = ()
    for row in state:
      if (row.state == "PRE_PROCESS"):
        car_update = CarType("Subaru", "WHITE", "PRE_PROCESS", Cars("Subaru", "PROCESS"))

        # update and commit
        update(car_update)
        db.commit()

    db.close()

    return car_update

# To update each car models
def update_cars(car_update, session):

    db = session
    cars = car_update

    cars_models = ()
    for row in cars:
      if (row.state == "PRE_PROCESS"):
        cars_models = CarModel("PRE_PROCESS", CarType("Subaru", "PROCESSED"), CarItem("PROCESSED", CarItemOther("Payment Processed")))
        
        # update and commit
        update(cars_models)
        db.commit()
    
    car_process = db.query(CarType).filter(CarType.state == "PROCESSED").all()
    cars_models_process = ()
    for row in car_process:
      if (row.state == "PROCESSED"):
        cars_models_process = CarModel("PROCESSED", CarType("Subaru", "PROCESSED"), CarItem("PROCESSED", CarItemOther("Payment Processed")))

        # update and commit
        update(cars_models_process)
        db.commit()

    # close session
    db.close()

    return cars_models
   
if __name__ == "__main__":
      
    # To update cars
    car_up = update_cars_models(session)
    update_cars(car_up, session)
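For contrast with the code above, here is a minimal sketch of the idiomatic ORM update pattern. The `Car` model and the state values are hypothetical stand-ins for the question's schema, and it assumes SQLAlchemy 1.4+ is installed:

```python
# Minimal sketch (not the asker's schema): load rows through a Session,
# mutate mapped attributes, and commit. The Car model is a stand-in.
from sqlalchemy import create_engine, Column, Integer, String
from sqlalchemy.orm import declarative_base, sessionmaker

Base = declarative_base()

class Car(Base):
    __tablename__ = "cars"
    id = Column(Integer, primary_key=True)
    name = Column(String)
    state = Column(String)

engine = create_engine("sqlite://")  # in-memory database for demonstration
Base.metadata.create_all(engine)
Session = sessionmaker(bind=engine)

session = Session()
session.add(Car(name="Subaru", state="PRE_PROCESS"))
session.commit()

# Option 1: load the objects and mutate them; the Session tracks the change.
for car in session.query(Car).filter(Car.state == "PRE_PROCESS"):
    car.state = "PROCESS"
session.commit()

# Option 2: bulk UPDATE without loading objects into memory.
session.query(Car).filter(Car.state == "PROCESS").update(
    {Car.state: "PROCESSED"}, synchronize_session=False
)
session.commit()

states = [c.state for c in session.query(Car).all()]
print(states)  # ['PROCESSED']
```

Note that the `update()` construct imported in the question only builds a statement; it does nothing until executed against a connection, which is likely why the original code appears to run without updating anything.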


from Recent Questions - Stack Overflow https://ift.tt/3rpAhoS
https://ift.tt/3D2mnew

How to turn off the scan for the war files in Jetty10

I am working with Jetty 10, and by default, WAR scanning is enabled with the scan interval set to 1 second. That means Jetty scans the complete webapps directory every second; please correct me if I am wrong. The code below is in jetty\etc\jetty-deploy.xml:

 <Set name="scanInterval"><Property name="jetty.deploy.scanInterval" default="1"/></Set>

I don't want that burden on my application, and turning this scan off would reduce the overhead of Jetty scanning the complete webapps directory every second.

So, my question is: how can we turn off this scan? Do we need to set it to -1, or is there another approach?
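For what it's worth, the commonly suggested approach (worth verifying against the Jetty 10 documentation for your exact version) is to set the interval to 0, which is generally reported to make the deployment scanner run once at startup and then stop watching the webapps directory:

```xml
<!-- Sketch: in etc/jetty-deploy.xml, or equivalently via
     jetty.deploy.scanInterval=0 in start.ini / on the command line.
     Assumption: Jetty treats 0 as "scan once at startup, no hot redeploy". -->
<Set name="scanInterval"><Property name="jetty.deploy.scanInterval" default="0"/></Set>
```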



from Recent Questions - Stack Overflow https://ift.tt/3xvFnkF

Expression template implementation in Rust like in boost::yap

I am trying to teach myself Rust and as a challenging learning project I want to replicate the design pattern of the C++ expression template library boost::yap. I don't want a full fledged implementation, I just want a small demonstrator to find out if Rust's generics are powerful enough to make it happen and learn something along the way.

I have come up with an idea but am currently stuck. My question is twofold:

  • Is there currently a principle barrier that makes expression templates with the transform feature (see boost::yap, or my code below) impossible in Rust?
  • If no, how can I make it work?

Here is what I have come up with so far.

I have an enum E that represents all supported operations. In practice, it would take two generic parameters representing the left and right hand side expressions of any binary operation and would have variants called Add, Mul, Sub and so on. I would implement the traits std::ops::{Add, Mul, Sub} etc. for E<U>.

For demonstration purposes however, let's assume that we only have two variants, Terminal represents an expression wrapping a value, and Neg is the only supported unary operation as of now.

use std::ops::Neg;

enum E<U> {
    Terminal(U),
    Neg(U)
}

impl<U> Neg for E<U> {
    type Output = E<E<U>>;
    fn neg(self) -> Self::Output {
        E::Neg(self)
    }
}

Next, I implement a trait Transform that lets me traverse an expression via its subexpressions with a closure. The closure will stop the recursion once it returns Some(_). This is what I have come up with (code does not compile):

trait Transform<Arg = Self> {

    fn transform<R,F>(&self, _f: F) -> Option<R>
    where F: FnMut(&Arg) -> Option<R> 
    {
        None
    }
}

impl<U> Transform for E<U> 
where U : Transform<U> + Neg
{
    fn transform<R,F>(&self, mut f: F) -> Option<R>
    where F: FnMut(&Self) -> Option<R>
    {
        // CASE 1/3: Match! return f(self)
        if let Some(v) = f(self) { return Some(v); };

        match self {
            E::Terminal(_) => None, // CASE 2/3: We have reached a leaf-expression, no match!
            E::Neg(x) => {      // CASE 3/3: Recurse and apply operation to result
                x.transform(f).map(|y| -y) // <- error[E0277]: expected a `FnMut<(&U,)>` closure, found `F`
            }
        }
    }
}

Here is the compiler error:

error[E0277]: expected a `FnMut<(&U,)>` closure, found `F`
  --> src/main.rs:36:29
   |
36 |                 x.transform(f).map(|y| -y) // <- error[E0277]: expected a `Fn<(&U,)>` closure, found `F`
   |                             ^ expected an `FnMut<(&U,)>` closure, found `F`
   |
help: consider further restricting this bound
   |
28 |     where F: FnMut(&Self) -> Option<R> + for<'r> std::ops::FnMut<(&'r U,)>
   |                                        +++++++++++++++++++++++++++++++++++

This is my Issue 1/2: I want to pass in a closure that can work both on Self and on U for E<U> (and thus also accepts E<E<U>> and E<E<E<U>>>...). Can this be done for generic types in Rust? Or, if my approach is wrong, what's the right way of doing this? In C++ I would use SFINAE or if constexpr.

Here is a little test for the expression template library, to see how this can be used:

fn main() {
    //This is needed, because of the trait bound `U: Transform` for `Transform`
    //Seems like an unnecessary burden on the user...
    impl Transform for i32{}

    // An expression template 
    let y = E::Neg(E::Neg(E::Neg(E::Terminal(42))));

    // A transform that counts the number of nestings
    let mut count = 0;
    y.transform(|x| {
        match x {
            E::Neg(_) => {
                count+=1;
                None
            }
            _ => Some(()) // must return something. It doesn't matter what here.
        }
    });
    assert_eq!(count, 3);
    
    // a transform that replaces the terminal in y with E::Terminal(5)
    let expr = y.transform(|x| {
       match x {
           E::Terminal(_) => Some(E::Terminal(5)),
           _ => None
       } 
    }).unwrap();
    
    // a transform that evaluates the expression
    // (note: should be provided as method for E<U>)
    let result = expr.transform(|x| {
        match *x {
            E::Terminal(v) => Some(v),
            _ => None
        }
    }).unwrap();
    assert_eq!(result, -5);
}

My Issue 2/2 is not a deal breaker, but I am wondering if there is some way that I can make the code work without this line:

impl Transform for i32{}

I think having to do this is a nuisance for the user of such a library. The problem is that I have the trait bound U: Transform on the implementation of Transform for E<U>. I have the feeling the unstable specialization feature might help here, but it would be awesome if this could be done in stable Rust.

Here is the rust playground link.

Edit:

If anyone else stumbles over this, here is a rust playground link that implements the solution of the accepted answer. It also cleans up some minor buggy stuff in the code above.



from Recent Questions - Stack Overflow https://ift.tt/3p88Ope

Web scraping with cheerio not working with some elements

I just started learning about web scraping and I found this tutorial: https://www.mundojs.com.br/2020/05/25/criando-um-web-scraper-com-nodejs/

It works fine, however I'm trying to get different elements from the same webpage: https://ge.globo.com/futebol/brasileirao-serie-a/

With the group of classes of the tutorial it brings all the elements with the selected class, but with other classes it doesn't work:

(screenshot: console output listing the elements found with the tutorial's class)

As can be seen all fifty elements with the class ranking-item-wrapper are returned, but if I select elements with the class lista-jogos__jogo it doesn't return anything:

(screenshots: console output showing that the lista-jogos__jogo selector returns nothing)

I don't get why I'm getting this error, since I'm doing exactly the same thing as is done in the tutorial.

Here is a short version of the code:

const axios = require('axios');
const cheerio = require('cheerio');
const url = 'https://ge.globo.com/futebol/brasileirao-serie-a/';

axios(url).then(response => {
  const html = response.data;
  const $ = cheerio.load(html);
  console.log($('.ranking-item-wrapper')) // => tutorial class
  console.log('***')
  console.log($('.lista-jogos__jogo')) // => class that I'm using
}).catch(console.error);


from Recent Questions - Stack Overflow https://ift.tt/313QmWS
https://ift.tt/3D0Fupx

How to order the tick labels on a discrete axis (0 indexed like a bar plot)

I have a dataframe with this data and want to plot it with a bar graph with x-axis labels being months

import pandas as pd

data = {'Birthday': ['1900-01-31', '1900-02-28', '1900-03-31', '1900-04-30', '1900-05-31', '1900-06-30', '1900-07-31', '1900-08-31', '1900-09-30', '1900-10-31', '1900-11-30', '1900-12-31'],
        'Players': [32, 25, 27, 19, 27, 18, 18, 21, 23, 21, 26, 23]}
df = pd.DataFrame(data)

  Birthday Players
1900-01-31      32
1900-02-28      25
1900-03-31      27
1900-04-30      19
1900-05-31      27
1900-06-30      18
1900-07-31      18
1900-08-31      21
1900-09-30      23
1900-10-31      21
1900-11-30      26
1900-12-31      23

This is what I have

import pandas as pd
import matplotlib.pyplot as plt
import matplotlib.dates as mdates

fig = plt.figure(figsize=(12, 7))
locator = mdates.MonthLocator()
fmt = mdates.DateFormatter('%b')
X = plt.gca().xaxis
X.set_major_locator(locator)
X.set_major_formatter(fmt)
plt.bar(month_df.index, month_df.Players, color = 'maroon', width=10)

but the result is this with the label starting from Feb instead of Jan

(screenshot: bar chart whose x-axis labels start at Feb instead of Jan)
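Since the goal is Jan..Dec labels on a bar chart, an alternative sketch is to skip the date locators entirely and plot against month-name strings, which gives a 0-indexed categorical axis starting exactly at the first row (the undefined `month_df` from the snippet is replaced here by the `df` built from the question's data):

```python
import matplotlib
matplotlib.use("Agg")  # headless backend for demonstration
import matplotlib.pyplot as plt
import pandas as pd

data = {'Birthday': ['1900-01-31', '1900-02-28', '1900-03-31', '1900-04-30',
                     '1900-05-31', '1900-06-30', '1900-07-31', '1900-08-31',
                     '1900-09-30', '1900-10-31', '1900-11-30', '1900-12-31'],
        'Players': [32, 25, 27, 19, 27, 18, 18, 21, 23, 21, 26, 23]}
df = pd.DataFrame(data)

# Convert dates to month-name strings; plotting strings makes matplotlib
# use a categorical (0-indexed) axis, so the first tick is the first row.
months = pd.to_datetime(df['Birthday']).dt.strftime('%b')

fig, ax = plt.subplots(figsize=(12, 7))
ax.bar(months, df['Players'], color='maroon')

fig.canvas.draw()  # populate tick label text before reading it back
labels = [t.get_text() for t in ax.get_xticklabels()]
print(labels[0], labels[-1])
```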



from Recent Questions - Stack Overflow https://ift.tt/32JcZAp
https://ift.tt/311btZT

Create web site using asp .net 2012 [closed]

Create a new product registration like add_product.aspx have: Product name - product number - country of manufacture - type - year of manufacture - price. Country of Manufacture drop down list



from Recent Questions - Stack Overflow https://ift.tt/316G3Rw

Split an array when meets a certain number?

Hello, I need to return the count of chunks in a given array of numbers.

A chunk can be defined as a sequence of one or more non-zero numbers; chunks are separated by one or more zeroes.

Example: array [5, 4, 0, 0, -1, 0, 2, 0, 0] contains 3 chunks

so the answer should be 3 since the array can be split into three chunks.

Can you help me with the solution to this one?

I've looped through the array but don't know how to deal with the multiple zeroes.
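A minimal sketch of the single-pass approach: count each position where a non-zero value appears after a zero (or at the start of the array). Runs of multiple zeroes are handled naturally because only the zero-to-non-zero transition is counted.

```python
def count_chunks(nums):
    count = 0
    previous_was_zero = True  # treat the start of the array as a boundary
    for n in nums:
        if n != 0 and previous_was_zero:
            count += 1        # a new chunk begins here
        previous_was_zero = (n == 0)
    return count

print(count_chunks([5, 4, 0, 0, -1, 0, 2, 0, 0]))  # 3
```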



from Recent Questions - Stack Overflow https://ift.tt/2ZCbEdq

Discord.py SQLite3 Banned Word System - Issue

So, I tried making a banned-word system using sqlite3, but I've run into an issue: it doesn't error at all, nor does it work.

My code (yes, I imported sqlite3, and the formatting is correct; it's just the code itself):


        @commands.Cog.listener()
        async def on_message(self, member):
            db = sqlite3.connect('.//SQL//bannedwords.sqlite')
            cursor = db.cursor()
            cursor.execute(f'SELECT msg FROM bannedwords WHERE guild_id = {message.guild.id}')
            result = cursor.fetchone()
            if result is None:

                return
            else:

                cursor.execute(f"SELECT msg FROM main WHERE guild_id = {member.guild.id}")
                result = cursor.fetchone()
                await message.author.delete()
                embed=discord.Embed(title="Blacklisted Word", description="Test")
                await message.send(embed=embed, delete_after=7.0)






    @commands.group(invoke_without_commands=True)
    async def add(self, ctx):
        return







    @add.command()
    async def word(self, ctx, channel:discord.TextChannel):
        if ctx.message.author.guild_permissions.administrator:
            db = sqlite3.connect('.//SQL//bannedwords.sqlite')
            cursor = db.cursor()
            cursor.execute(f'SELECT msg FROM bannedwords WHERE guild_id = {ctx.guild.id}')
            result = cursor.fetchone()
            if result is None:
                sql = ("INSERT INTO bannedwords(guild_id, msg) VALUES(?,?)")
                val = (ctx.guild.id, msg)
                await ctx.send(f"h")
            elif result is not None:
                sql = ("UPDATE bannedwords SET msg = ? WHERE guild_id = ?")
                val = (msg, ctx.guild.id)
                await ctx.send(f"added")
            cursor.execute(sql, val)
            db.commit()
            cursor.close()
            db.close()

I am aware that I put a text channel parameter, but I don't think that's the only issue; rather, I'm not too sure what to replace it with so it detects messages that are in the msg column.
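Independent of the Discord specifics, the f-string SQL above is fragile; parameterized queries with sqlite3 placeholders are the usual fix. A sketch of just the database logic, with the same hypothetical `bannedwords` table and the Discord event wiring omitted:

```python
import sqlite3

# In-memory database standing in for .//SQL//bannedwords.sqlite
db = sqlite3.connect(":memory:")
cur = db.cursor()
cur.execute("CREATE TABLE bannedwords (guild_id INTEGER, msg TEXT)")

def set_banned_word(guild_id, word):
    # Placeholders (?) avoid both SQL injection and quoting bugs.
    cur.execute("SELECT msg FROM bannedwords WHERE guild_id = ?", (guild_id,))
    if cur.fetchone() is None:
        cur.execute("INSERT INTO bannedwords (guild_id, msg) VALUES (?, ?)",
                    (guild_id, word))
    else:
        cur.execute("UPDATE bannedwords SET msg = ? WHERE guild_id = ?",
                    (word, guild_id))
    db.commit()

def get_banned_word(guild_id):
    cur.execute("SELECT msg FROM bannedwords WHERE guild_id = ?", (guild_id,))
    row = cur.fetchone()
    return row[0] if row else None

set_banned_word(123, "badword")
set_banned_word(123, "worseword")   # second call takes the UPDATE branch
print(get_banned_word(123))  # worseword
```

Separately, in discord.py the on_message listener receives the message itself (conventionally named `message`, not `member`), deleting it is `await message.delete()` rather than `message.author.delete()`, and the reply should go through `message.channel.send(...)`.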



from Recent Questions - Stack Overflow https://ift.tt/3cX7rUx

Apache Flink : Batch Mode failing for Datastream API's with exception `IllegalStateException: Checkpointing is not allowed with sorted inputs.`

A continuation to this : Flink : Handling Keyed Streams with data older than application watermark

based on the suggestion, I have been trying to add support for Batch in the same Flink application which was using the Datastream API's.

The logic is something like this :

streamExecutionEnvironment.setRuntimeMode(RuntimeExecutionMode.BATCH);
streamExecutionEnvironment.readTextFile("fileName")
.process(process function which transforms input)
.assignTimestampsAndWatermarks(WatermarkStrategy
                .<DetectionEvent>forBoundedOutOfOrderness(orderness)
                .withTimestampAssigner(
                        (SerializableTimestampAssigner<Event>) (event, l) -> event.getEventTime()))
.keyBy(keyFunction)
.window(TumblingEventWindows(Time.of(x days))
.process(processWindowFunction);

Based on the public docs, my understanding was that I simply needed to change the source to a bounded one. However, the above processing keeps failing at the event trigger after the windowing step with the exception below:

java.lang.IllegalStateException: Checkpointing is not allowed with sorted inputs.
    at org.apache.flink.util.Preconditions.checkState(Preconditions.java:193)
    at org.apache.flink.streaming.runtime.tasks.OneInputStreamTask.init(OneInputStreamTask.java:99)
    at org.apache.flink.streaming.runtime.tasks.StreamTask.executeRestore(StreamTask.java:552)
    at org.apache.flink.streaming.runtime.tasks.StreamTask.runWithCleanUpOnFail(StreamTask.java:647)
    at org.apache.flink.streaming.runtime.tasks.StreamTask.restore(StreamTask.java:537)
    at org.apache.flink.runtime.taskmanager.Task.doRun(Task.java:764)
    at org.apache.flink.runtime.taskmanager.Task.run(Task.java:571)
    at java.base/java.lang.Thread.run(Thread.java:829)

The input file contains the historical events for multiple keys. The data for a given key is sorted, but the overall data is not. I have also added an event at the end of each key with the timestamp = MAX_WATERMARK to indicate end of keyed Stream. I tried it for a single key as well but the processing failed with the same exception.

Note: I have not enabled checkpointing. I have also tried explicitly disabling checkpointing to no avail.

env.getCheckpointConfig().disableCheckpointing();

EDIT - 1

Adding more details: I tried switching to FileSource to read the files, but I still get the same exception.

environment.fromSource(FileSource.forRecordStreamFormat(new TextLineFormat(), path).build(),
WatermarkStrategy.noWatermarks(),
"Text File")

The first process step and key splitting works. However it fails after that. I tried removing windowing and adding a simple process step but it continues to fail. There is no explicit Sink. The last process function simply updates a database.

Attaching images (process and exception timelines) in case they help.

Is there something I'm missing ?



from Recent Questions - Stack Overflow https://ift.tt/3D0XtMB
https://ift.tt/3rcbtRo

SQL - Work center schedule list - calculate lead times [closed]

I need help calculating a work center schedule list in SQL: that is, the start and finish dates of each operation, given input parameters such as operation lead times. This is complicated by the fact that some work centers may run 1, 2, or 3 work shifts per day. The table shows an example of one order with input data from the database and what the output, in the form of start/finish date times, should look like (yellow columns). Can anyone help me write an SQL script with the desired result? I have tried several approaches, but none worked in general across various examples. Thanks in advance for the ideas.

The table shows an example: https://i.stack.imgur.com/xFKvC.jpg

The resulting start/finish date time values are listed in the "table example" link above. Here is my MS SQL query, which does not calculate properly (it only handles a capacity of one shift per day), but you may find the error in it:

CREATE TABLE temp_wcs 
(SEQN INT NOT NULL,
ORDER_NO INT NOT NULL,
PRODUCT_NO INT NOT NULL,
ORDER_QUANTITY INT NOT NULL,
OPERATION_NO INT NOT NULL,
WORK_CENTER INT NOT NULL,
WORKCENTE_SHIFT_CAPACITY INT NOT NULL,
RUN_TIME DECIMAL(10,2) NOT NULL,
SETUP_TIME DECIMAL(10,2) NOT NULL,
LEAD_TIME DECIMAL(10,4),
OPE_START_DATE_TIME DATETIME,
OPE_FINNISH_DATE_TIME DATETIME) 

GO 

INSERT INTO temp_wcs VALUES 
(1, 20210001, 10000001, 100, 5, 101, 1, 6, 60, null, null, null), 
(2, 20210001, 10000001, 100, 10, 102, 1, 4, 20, null, null, null),
(3, 20210001, 10000001, 100, 15, 103, 2, 5, 10, null, null, null),
(4, 20210001, 10000001, 100, 20, 104, 2, 1, 20, null, null, '10.12.2021 22:00:00') 

GO 

with temp as 
(select a.*,
((a.ORDER_QUANTITY * a.RUN_TIME) + SETUP_TIME) / 60 as LT_HR,
cast(((((a.ORDER_QUANTITY * a.RUN_TIME) + SETUP_TIME) / 60) / 8.0 ) as int) + ((((a.ORDER_QUANTITY * a.RUN_TIME) + SETUP_TIME) / 60) / 8.0) % 1.0 * 8 / 24 as LT
from temp_wcs a), 

t_CummLT as 
(select *,
Sum( LT ) Over (partition by ORDER_NO, PRODUCT_NO order by SEQN desc) as CummLT
from temp), 

t_LTshifts as 
(select *,
case when CummLT % 1.0 > (8.0/24.0)
     then (16.0/24.0)
     else 0 end as LTshifts
from t_CummLT), 

t_CummLTshifts as
(select *,
Sum( LTshifts) Over (partition by ORDER_NO, PRODUCT_NO order By SEQN desc) as CummLTshifts 
from t_LTshifts), 

result as 
(select a.SEQN, a.ORDER_NO, a.PRODUCT_NO, a.OPERATION_NO, a.LT_HR,
Cumm = a.CummLT + a.CummLTshifts,
a.OPE_FINNISH_DATE_TIME 
from t_CummLTshifts a 
where a.SEQN = (select max(SEQN) from temp b where b.ORDER_NO = a.ORDER_NO and b.PRODUCT_NO = a.PRODUCT_NO group by ORDER_NO, PRODUCT_NO)

union all

select b.SEQN, b.ORDER_NO, b.PRODUCT_NO, b.OPERATION_NO, b.LT_HR,
Cumm = b.CummLT + b.CummLTshifts,
FINISH = a.OPE_FINNISH_DATE_TIME - a.Cumm 
from result a, t_CummLTshifts b 
where a.ORDER_NO = b.ORDER_NO and a.PRODUCT_NO = b.PRODUCT_NO and b.SEQN = a.SEQN-1)
 
select * from result order by SEQN 


from Recent Questions - Stack Overflow https://ift.tt/3xxY33b

urllib HTTP Error 400: Bad Request | Download OneDrive files from Organisation

Goal: download files within a specified folder from my Organisation's OneDrive using Python.

Following this Towards Data Science tutorial, I provide a OneDrive URL, and it generates an API URL.

Error:

HTTP Error 400: Bad Request

I suspect this fails because I need to pass permissions in this call from Python's end.

I have tried all combinations of Copy link permissions on OneDrive, both for entire-folder and individual-file access. (Screenshot: Copy link options on OneDrive.)


Jupyter Notebook:

# pip install opendatasets

import base64
import os       # used below by os.remove
import zipfile  # used below by zipfile.ZipFile

import opendatasets as od
import urllib

#ONEDRIVE_URL = # folder url
ONEDRIVE_URL = # file url

def create_onedrive_directdownload(onedrive_link):
    data_bytes64 = base64.b64encode(bytes(onedrive_link, 'utf-8'))
    data_bytes64_String = data_bytes64.decode('utf-8').replace('/','_').replace('+','-').rstrip("=")
    download_url = f"https://api.onedrive.com/v1.0/shares/u!{data_bytes64_String}/root/content"
    return download_url

def download(download_url):
    try:
        download = od.download(download_url)

        path_extract = '../data/gri/'
        with zipfile.ZipFile(path_extract + 'iris_database.zip', 'r') as zip_ref:
            zip_ref.extractall(path_extract)

        # shutil.rmtree(path_extract)
        os.remove(path_extract[:-1] + '.zip')

    except (urllib.error.URLError, IOError, RuntimeError) as e:
        print('download()', e)

download_url = create_onedrive_directdownload(ONEDRIVE_URL)
download_url
>>> 'https://api.onedrive.com/v1.0/shares/u!'...

download(download_url)
>>> download() HTTP Error 400: Bad Request


from Recent Questions - Stack Overflow https://ift.tt/3ri38vm
https://ift.tt/3d1XCoc

Based on the ratings given by the users for the movies, compute the similarities between all users [closed]

I have two datasets. rating and movie.

Rating:-

UserID  MovieID Rating  Timestamp
0   1   122     5.0    838985046
1   10  185     3.0    838983525
2   2   231     1.0    838983392
3   8   292     5.0    838983421
4   1   316     4.0    838983392
5   5   329     3.0    838983392
6   3   355     2.0    838984474
7   7   356     1.0    838983653
8   6   362     5.0    838984885
9   4   364     2.5    838983707

Movie:-

MovieID Title   Genres
0   1   Toy Story (1995)    Adventure|Animation|Children|Comedy|Fantasy
1   2   Jumanji (1995)  Adventure|Children|Fantasy
2   3   Grumpier Old Men (1995) Comedy|Romance
3   4   Waiting to Exhale (1995)    Comedy|Drama|Romance
4   5   Father of the Bride Part II (1995)  Comedy
5   6   Heat (1995) Action|Crime|Thriller
6   7   Sabrina (1995)  Comedy|Romance
7   8   Tom and Huck (1995) Adventure|Children
8   9   Sudden Death (1995) Action
9   10  GoldenEye (1995)    Action|Adventure|Thriller

Now I need to find the similarities between all the users based on the ratings given by them.

Below is what I have done so far:

import pandas as pd
import numpy as np
from sklearn.feature_extraction.text import CountVectorizer
from sklearn.metrics.pairwise import cosine_similarity

rating= "ratings.dat"
names_r=['UserID','MovieID','Rating','Timestamp']
ratings = pd.read_csv(rating, names=names_r, sep = '::')

movie = "movies.dat"
names_m=['MovieID','Title','Genres']
movies = pd.read_csv(movie, names=names_m, sep = '::')

merged_df=ratings.merge(movies, on='MovieID')

merged_df.drop('Timestamp', axis=1, inplace=True)

After that, I am confused about how to calculate the similarities between all the users.
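From the merged dataframe, one standard sketch is to pivot ratings into a user-by-movie matrix and compute cosine similarity between the user rows. The tiny ratings frame below is a made-up stand-in for the question's data:

```python
import numpy as np
import pandas as pd

# Made-up stand-in for merged_df[['UserID', 'MovieID', 'Rating']].
ratings = pd.DataFrame({
    'UserID':  [1, 1, 2, 2, 3],
    'MovieID': [10, 20, 10, 20, 20],
    'Rating':  [5.0, 3.0, 5.0, 3.0, 1.0],
})

# Rows = users, columns = movies, missing ratings treated as 0.
matrix = ratings.pivot_table(index='UserID', columns='MovieID',
                             values='Rating', fill_value=0).to_numpy()

# Cosine similarity: normalize each user row, then take dot products.
norms = np.linalg.norm(matrix, axis=1, keepdims=True)
normalized = matrix / norms
similarity = normalized @ normalized.T

print(np.round(similarity, 3))
```

Since the question already imports it, sklearn's `cosine_similarity(matrix)` computes the same user-by-user matrix in one call.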



from Recent Questions - Stack Overflow https://ift.tt/3l7ukce

Maximum function in Gurobi API C++

I need to add the following constraint to my model written in C++ to call Gurobi

max{x_1-x_2, 0} >= 1

I have found addGenConstrMax, but this adds the maximum constraints directly, and in my case I need the maximum to be greater than 1.



from Recent Questions - Stack Overflow https://ift.tt/3lgeX1A

Change a value to true on liquid Shopify overlay header

I'm trying to set the overlay header on some pages where the overlay header is not a standard setting on my Motion theme of my Shopify store.

This is the store link: https://hatproof.com/

I should say up front that I'm not very good with Liquid code, and I'm asking here for help.

This are the original code lines:

assign template_name = template | replace: '.', ' ' | truncatewords: 2, '' | handle
assign sticky_header = false
  assign overlay_header = false

  if section.settings.header_sticky
    assign sticky_header = true
  endif

  if template_name == 'index' and section.settings.sticky_index
    assign overlay_header = true
  endif
  if template_name == 'collection' and collection.image and section.settings.sticky_collection
    assign overlay_header = true
  endif 

I've tried editing it by adding this at the bottom:

if template_name == 'page.chisiamo'
    assign overlay_header = true
  endif 

But the "overlay_header" value of the page https://hatproof.com/pages/nuovo-chi-siamo (template name 'page.chisiamo') hasn't changed to "true".

I've also tried editing the compared value to 'page' and 'page chisiamo', but it never worked.

This is how the value is used:

<div data-section-id="" data-section-type="header">
  <div id="HeaderWrapper" class="header-wrapper">

What can I do to change the "overlay_header" value to true of some specific pages?
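Looking at the theme's own assign line, `template | replace: '.', ' ' | truncatewords: 2, '' | handle` turns a template named "page.chisiamo" into the handle "page-chisiamo": the dot becomes a space, and handleizing joins words with hyphens. So (as an untested guess derived from those filters, written in the theme's existing style) the comparison would need to match that handleized form:

```liquid
if template_name == 'page-chisiamo'
    assign overlay_header = true
  endif
```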



from Recent Questions - Stack Overflow https://ift.tt/3xuiIVK

How to select next MAX() date in a partition without getting duplicates?

My goal is to select the closest document to each invoice with these conditions:

  1. DocumentDate must be before the InvoiceDate
  2. No DocumentDate can be selected more than once (invoices are processed in InvoiceDate DESC order)

My first table contains clients and all their invoices. My second table contains a list of document ids and corresponding dates for each client. Please see the queries I have tried below and share your suggestions!

https://dbfiddle.uk/?rdbms=sqlserver_2019&fiddle=9c49ec4ea9b22bbd4a2eb816407d708f

Data:

Invoices:

clientid    invoiceid   invoicedate
18924       2819        2019-01-17
18924       4524        2019-01-15
18924       9897        2018-12-31
18924       1591        2018-12-25
27113       5808        2020-10-16
27113       4359        2020-10-11
27113       3405        2020-10-01
27113       9889        2020-09-21
27113       1976        2020-09-19

Documents:

clientid    documentid  documentdate
18924       2851        2019-01-27
18924       2500        2019-01-25
18924       9979        2019-01-12
18924       8913        2019-01-06
18924       3363        2019-01-02
27113       9533        2020-10-14
27113       9525        2020-10-12
27113       9521        2020-09-25
27113       8680        2020-09-11
27113       3504        2020-09-03
27113       5188        2020-08-17

Desired Output:

clientid    invoiceid   invoicedate     documentid  documentdate
18924       2819        2019-01-17      9979        2019-01-12
18924       4524        2019-01-15      8913        2019-01-06
18924       9897        2018-12-31
18924       1591        2018-12-25
27113       5808        2020-10-16      9533        2020-10-14
27113       4359        2020-10-11      9521        2020-09-25
27113       3405        2020-10-01      8680        2020-09-11
27113       9889        2020-09-21      3504        2020-09-03
27113       1976        2020-09-19      5188        2020-08-17

What I've Tried:

When I use this query, it doesn't constrain each documentdate to being chosen only once.

SELECT
    invoice.clientid,
    invoiceid,
    invoicedate,
    MAX(documentdate) 'documentdate'
FROM Invoice
    LEFT JOIN Document ON Invoice.clientid = Document.clientid
        AND Invoice.invoicedate > Document.documentdate
GROUP BY invoice.clientid,
invoiceid, 
invoicedate
ORDER BY invoice.clientid, 
invoicedate desc
This returns:

clientid    invoiceid   invoicedate documentdate
18924       2819        2019-01-17  2019-01-12
18924       4524        2019-01-15  2019-01-12
18924       9897        2018-12-31  
18924       1591        2018-12-25  
27113       5808        2020-10-16  2020-10-14
27113       4359        2020-10-11  2020-09-25
27113       3405        2020-10-01  2020-09-25
27113       9889        2020-09-21  2020-09-11
27113       1976        2020-09-19  2020-09-11

This query suggested by user @eshirvana only works as long as every documentdate is chosen.

WITH
    data
    AS
    (
        SElECT
            Invoice.clientid 'clientid',
            invoiceid,
            invoicedate,
            dense_rank() over (partition by Invoice.clientid order by invoicedate desc) 'InvoiceRank',
            documentid,
            documentdate,
            dense_rank() over (partition by Invoice.clientid order by documentdate desc) 'DocumentRank'
        FROM Invoice
            LEFT JOIN Document ON Invoice.clientid = Document.clientid
                AND Invoice.invoicedate > Document.documentdate
    )


SELECT *
FROM data
WHERE InvoiceRank = DocumentRank 
or Documentid is null
ORDER BY clientid , InvoiceRank
This returns:

clientid    invoiceid   invoicedate InvoiceRank documentid  documentdate    DocumentRank
18924       2819        2019-01-17  1           9979        2019-01-12      1
18924       4524        2019-01-15  2           8913        2019-01-06      2
18924       9897        2018-12-31  3                                       4   
18924       1591        2018-12-25  4                                       4   
27113       5808        2020-10-16  1           9533        2020-10-14      1
27113       3405        2020-10-01  3           9521        2020-09-25      3
27113       9889        2020-09-21  4           8680        2020-09-11      4
27113       1976        2020-09-19  5           3504        2020-09-03      5


from Recent Questions - Stack Overflow https://ift.tt/31aOJWL

How to copy a character from basic string into a vector string?

//Defining the class

class Hangman
{
    private:
        vector<string> dictionary;          //stores all the words
        vector<string> secretWord;          //stores the secret word
        vector<string> misses;              //keeps record of wrong guesses
        vector<string> displayVector;           //Stores "_"
        string originalWord;                //stores a copy of the secret word to display at the end of the game
        bool gameOver = false;              //flag to check if the player lost or is still in the game
        int totalAttempts;
            
    public:                                 
    void selectRandWord();                      
};

//This is the function I am having a problem in.

void Hangman::selectRandWord()
{
    secretWord.clear();

//word is a basic string that stores a random word, let's say "Hello World".

    string word;
    srand(time(NULL)); 
    int random = (rand() % dictionary.size()) + 1;

//I store a random word from vector to word.

    word = dictionary[random];
    transform(word.begin(), word.end(), word.begin(), ::tolower);           
    originalWord = word;
    for (int index = 0; index < word.length(); index++) 
    { 

//This line has the error: [Error] invalid user-defined conversion from 'char' to 'std::vector<std::basic_string<char> >::value_type&& {aka std::basic_string<char>&&}' [-fpermissive]

//What I am trying to do is take each character from word (for example: "H") and push it back into the vector<string> secretWord.

        secretWord.push_back(word[index]); 
    } 
}


from Recent Questions - Stack Overflow https://ift.tt/3o2xq36
https://ift.tt/3D0S34d

TCPDF unable to output file by saving. Path on Mac?

I checked some similar questions here, but I didn't get an answer that solves my problem.

I use TCPDF to generate a PDF with PHP.

When I use the $pdf->Output($file_total, 'I'); it's all good and the file is shown in Browser.

If I use the save to localhost option $pdf->Output($file_total, 'F'); I get this error:

"failed to open stream: Permission denied in /opt/lampp/htdocs/project/tcpdf_min/include/tcpdf_static.php on line 1821" "TCPDF ERROR: Unable to create output file"

The Path I set for saving the PDF file is: "/opt/lampp/htdocs/project/files/2021"

What am I doing wrong? Is my path wrong?

The forced download $pdf->Output($file_total, 'D'); works just fine, so I must be doing something wrong with the path on Mac... please help!

Thanks



from Recent Questions - Stack Overflow https://ift.tt/3CWHvCX
https://ift.tt/eA8V8J

2021-11-28

Fetch list of transactions from json

In my task I need to fetch some data.

I made UserModelClass where I put some data classes:

      "id": "1",
      "IBAN": "HR123456789012345678901",
      "amount": "2.523,00",
      "currency": "HRK",
      "transactions": [
        {
          "id": "1",
          "date": "25.01.2016.",
          "description": "Uplata 1",
          "amount": "15,00 HRK"
        },
        {
          "id": "2",
          "date": "17.02.2016.",
          "description": "Uplata 2",
          "amount": "50,00 HRK",
          "type": "GSM VOUCHER"
        }]

This is my code:

data class UserModelClass (
    val user_id:String,
    val accounts: List<AccountsList>
)

data class AccountsList(
    val id: String,
    val IBAN: String,
    val amount: String,
    val currency: String,
    val transactions: List<transactionsList>

)

data class transactionsList(
    val id: String,
    val date: String,
    val description: String,
    val amount: String
    //
)

The problem is that some transactions have a type and some don't. I tried to make one more data class transactionsList where I put val type, but there can be only one data class with the same name.
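In Kotlin the usual fix is a single nullable property with a default on the existing transactionsList class (val type: String? = null) rather than a second data class. As a language-neutral sketch of the same idea (the sample rows below are made up from the excerpt above), a missing field can simply default to null/None during parsing:

```python
import json

# Hypothetical excerpt of the transactions array; "type" is present on some
# transactions and absent on others.
raw = """[
  {"id": "1", "date": "25.01.2016.", "description": "Uplata 1", "amount": "15,00 HRK"},
  {"id": "2", "date": "17.02.2016.", "description": "Uplata 2", "amount": "50,00 HRK", "type": "GSM VOUCHER"}
]"""

# dict.get returns None when the key is missing, mirroring a nullable
# field with a default value.
transactions = [{**tx, "type": tx.get("type")} for tx in json.loads(raw)]
print([tx["type"] for tx in transactions])
# → [None, 'GSM VOUCHER']
```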



from Recent Questions - Stack Overflow https://ift.tt/3HZJdr1
https://ift.tt/eA8V8J

Using list comprehension as condition for if else statement

I have the following code which works well

list = ["age", "test=53345", "anotherentry", "abc"]

val = [s for s in list if "test" in s]
if val != " ":
    print(val)

But what I'm trying to do is use the list comprehension as the condition for an if/else statement, as I need to check more than one word for occurrence. Knowing it will not work, I'm searching for something like this:

PSEUDOCODE
if (is True = [s for s in list if "test" in s])
print(s)
elif (is True = [l for l in list if "anotherentry" in l])
print(l)
else:
print("None of the searched words found")
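A working sketch of the pseudocode above: a non-empty list is truthy in Python, so each comprehension result can be bound to a name and used directly as the condition (I've renamed list to words to avoid shadowing the builtin):

```python
words = ["age", "test=53345", "anotherentry", "abc"]

test_hits = [s for s in words if "test" in s]
other_hits = [s for s in words if "anotherentry" in s]

if test_hits:        # a non-empty list is truthy
    print(test_hits)
elif other_hits:
    print(other_hits)
else:
    print("None of the searched words found")
# → ['test=53345']
```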


from Recent Questions - Stack Overflow https://ift.tt/313c5h6
https://ift.tt/eA8V8J

how can I fix this error when writing npm run start in vs code terminal?

I want to start using React, so I first installed Node on my Windows machine via Command Prompt, and then created a React app using npx create-react-app ... . Later, in the VS Code terminal, when I write npm run start I get the following error. I searched a lot but didn't find a way to fix it.

npm ERR! code ENOENT
npm ERR! syscall open
npm ERR! path C:\Users\LENOVO\Desktop\React/package.json
npm ERR! errno -4058
npm ERR! enoent ENOENT: no such file or directory, open 'C:\Users\LENOVO\Desktop\React\package.json'
npm ERR! enoent This is related to npm not being able to find a file.
npm ERR! enoent

npm ERR! A complete log of this run can be found in:
npm ERR!     C:\Users\LENOVO\AppData\Local\npm-cache\_logs\2021-11-27T19_22_54_184Z-debug.log


from Recent Questions - Stack Overflow https://ift.tt/314LKQ0
https://ift.tt/eA8V8J

Why is validation accuracy stuck at 33% w/ relu activation and 100% at first epoch w/ softmax?

TLDR: When my tensorflow model’s last dense layer uses relu activation, val_accuracy is stuck at about 0.3396 (from epochs 1-10). When I modify the last dense layer’s activation to softmax, val_accuracy is 1.00 starting at the first epoch. Why is val_accuracy so high at the first epoch? My model incorrectly predicts all new images.

My attempt to fix incorrectly high val_accuracy: I’ve ensured that each class has the same amount of data. (Each has about 500 images). I know overfitting can be a problem. I have added a dropout layer to try to resolve this.

I would appreciate your help.

This is my model:

def model():
  model_input = tf.keras.layers.Input(shape=(h, w, 3)) 
  x = tf.keras.layers.Rescaling(rescale_factor)(model_input) 
  x = tf.keras.layers.Conv2D(16, 3, activation='relu',padding='same')(x)
  x = tf.keras.layers.Dropout(.5)(x)
  x = tf.keras.layers.MaxPooling2D()(x) 
  x = tf.keras.layers.Flatten()(x)
  x = tf.keras.layers.Dense(128, activation='relu')(x)
  outputs = tf.keras.layers.Dense(num_classes, activation = 'relu')(x)

These are my training results. Why is val_accuracy stuck at 0.3396?

Epoch 1/10
41/41 [==============================] - 3s 62ms/step - loss: 6.1832 - accuracy: 0.3310 - val_loss: 5.9654 - val_accuracy: 0.3396
Epoch 2/10
41/41 [==============================] - 2s 56ms/step - loss: 6.1537 - accuracy: 0.3326 - val_loss: 5.9654 - val_accuracy: 0.3396
Epoch 3/10
41/41 [==============================] - 2s 57ms/step - loss: 6.1537 - accuracy: 0.3326 - val_loss: 5.9654 - val_accuracy: 0.3396
Epoch 4/10
41/41 [==============================] - 2s 57ms/step - loss: 6.1537 - accuracy: 0.3326 - val_loss: 5.9654 - val_accuracy: 0.3396
Epoch 5/10
41/41 [==============================] - 2s 57ms/step - loss: 6.1537 - accuracy: 0.3326 - val_loss: 5.9654 - val_accuracy: 0.3396
Epoch 6/10
41/41 [==============================] - 2s 57ms/step - loss: 6.1537 - accuracy: 0.3326 - val_loss: 5.9654 - val_accuracy: 0.3396
Epoch 7/10
41/41 [==============================] - 2s 57ms/step - loss: 6.1537 - accuracy: 0.3326 - val_loss: 5.9654 - val_accuracy: 0.3396
Epoch 8/10
41/41 [==============================] - 2s 57ms/step - loss: 6.1537 - accuracy: 0.3326 - val_loss: 5.9654 - val_accuracy: 0.3396
Epoch 9/10
41/41 [==============================] - 2s 58ms/step - loss: 6.1537 - accuracy: 0.3326 - val_loss: 5.9654 - val_accuracy: 0.3396
Epoch 10/10
41/41 [==============================] - 2s 57ms/step - loss: 6.1537 - accuracy: 0.3326 - val_loss: 5.9654 - val_accuracy: 0.3396

When I change the last line to outputs = tf.keras.layers.Dense(num_classes, activation = 'softmax')(x), these are my training results. Why is val_accuracy different for softmax vs. relu, and why is val_accuracy 1.000 at the first epoch?

41/41 [==============================] - 3s 47ms/step - loss: 0.1347 - accuracy: 0.9735 - val_loss: 1.2451e-05 - val_accuracy: 1.0000
Epoch 2/10
41/41 [==============================] - 2s 43ms/step - loss: 2.1454e-06 - accuracy: 1.0000 - val_loss: 1.5783e-06 - val_accuracy: 1.0000
Epoch 3/10
41/41 [==============================] - 2s 43ms/step - loss: 1.2228e-06 - accuracy: 1.0000 - val_loss: 1.2556e-06 - val_accuracy: 1.0000
Epoch 4/10
41/41 [==============================] - 2s 43ms/step - loss: 1.0604e-06 - accuracy: 1.0000 - val_loss: 1.1204e-06 - val_accuracy: 1.0000
Epoch 5/10
41/41 [==============================] - 2s 44ms/step - loss: 9.8337e-07 - accuracy: 1.0000 - val_loss: 1.0190e-06 - val_accuracy: 1.0000
Epoch 6/10
41/41 [==============================] - 2s 43ms/step - loss: 9.1662e-07 - accuracy: 1.0000 - val_loss: 9.6480e-07 - val_accuracy: 1.0000
Epoch 7/10
41/41 [==============================] - 2s 43ms/step - loss: 8.4151e-07 - accuracy: 1.0000 - val_loss: 8.8682e-07 - val_accuracy: 1.0000
Epoch 8/10
41/41 [==============================] - 2s 43ms/step - loss: 7.7699e-07 - accuracy: 1.0000 - val_loss: 8.3223e-07 - val_accuracy: 1.0000
Epoch 9/10
41/41 [==============================] - 2s 43ms/step - loss: 7.1349e-07 - accuracy: 1.0000 - val_loss: 7.2788e-07 - val_accuracy: 1.0000
Epoch 10/10
41/41 [==============================] - 2s 43ms/step - loss: 6.6057e-07 - accuracy: 1.0000 - val_loss: 6.6252e-07 - val_accuracy: 1.0000

Edit: Model.fit and model.compile parameters:

model = tf.keras.Model(model_input, outputs)

model.compile(optimizer='adam',
              loss=tf.keras.losses.SparseCategoricalCrossentropy(from_logits=False),
              metrics=['accuracy'])

hist = model.fit(
    train_ds,
    validation_data=val_ds,
    epochs=10
)


from Recent Questions - Stack Overflow https://ift.tt/3E368PK
https://ift.tt/eA8V8J

How to resample a df with multiple columns

I have minute-level data for multiple requests. I would like to resample it to hourly and group by the request so that I can get the total number of requests per hour.

This is how the data looks:

    | RequestDate | Request | Count |
    | 2021-11-24 22:09:00 | Request 1 | 10 |
    | 2021-11-24 22:09:00 | Request 3 | 1 |
    | 2021-11-24 22:09:00 | Request 2 | 4 |
    | 2021-11-24 22:09:00 | Request 4 | 5 |
    | 2021-11-24 22:10:00 | Request 1 | 4 |
    | 2021-11-24 22:10:00 | Request 2 | 0 |
    | 2021-11-24 22:10:00 | Request 3 | 6 |
    | 2021-11-24 22:10:00 | Request 4 | 5 |
    | 2021-11-24 22:10:00 | Request 5 | 1 |

Output:

    | RequestDate | Request | Count |
    | 2021-11-24 22:00:00 | Request 1 | 14 |
    | 2021-11-24 22:00:00 | Request 2 | 4 |
    | 2021-11-24 22:00:00 | Request 3 | 7 |
    | 2021-11-24 22:00:00 | Request 4 | 10 |
    | 2021-11-24 22:00:00 | Request 5 | 1 |

I tried this but it ended in an error:

    df_groupby = df.groupby(by=[df["RequestDate"].resample('h'), "Request"])
    
    df_groupby["Request"]
    
    KeyError: 'RequestDate'

A df with test data can be created as follows:

df = pd.read_csv("test_data.csv")

test_data.csv

RequestDate,Request,RequestCount
2021-11-18 00:00:00,Request1,4
2022-11-18 00:00:00,Request2,4
2022-11-18 00:00:00,Request3,4
2022-11-18 00:00:00,/Request4,4
2022-11-18 00:00:00,Request5,4
2021-11-18 00:01:00,Request1,4
2021-11-18 00:02:00,Request1,2
2021-11-18 00:03:00,Request2,3
2022-11-18 00:04:00,Request3,4
2021-11-18 00:05:00,Request1,4
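For what it's worth, the KeyError comes from calling .resample() on a column: resample operates on the index. A hedged sketch using pd.Grouper, which bins a datetime column by hour directly inside groupby (the rows are made up from the first table in the question):

```python
import pandas as pd

# A few of the minute-level rows from the question.
df = pd.DataFrame({
    "RequestDate": pd.to_datetime([
        "2021-11-24 22:09:00", "2021-11-24 22:09:00",
        "2021-11-24 22:10:00", "2021-11-24 22:10:00",
    ]),
    "Request": ["Request 1", "Request 2", "Request 1", "Request 2"],
    "Count": [10, 4, 4, 0],
})

# Group the datetime column into hourly bins with pd.Grouper, then sum per request.
hourly = (
    df.groupby([pd.Grouper(key="RequestDate", freq="h"), "Request"])["Count"]
      .sum()
      .reset_index()
)
print(hourly)
```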


from Recent Questions - Stack Overflow https://ift.tt/30ZiP01
https://ift.tt/eA8V8J

Copy to Clipboard element's text value using jQuery

Edit: I am not trying to copy a textarea or input value, so it is not a duplicate question as suggested.

Can I use copy to clipboard for an element's text value?

Such as I want to copy terra1

<span id="terra-wallet-address">terra1</span>

And jQuery:

  jQuery('#terra-wallet-address').focus();
  jQuery('#terra-wallet-address').select();
  document.execCommand('copy');
  jQuery('.copied').text("Copied to clipboard").show().fadeOut(1200);

I also tried .val() and .text() but they did not work.

Thank you.



from Recent Questions - Stack Overflow https://ift.tt/3nZ2aSN
https://ift.tt/eA8V8J

how to use values with sed in shell scripting?

I am trying to write a shell script for an alphametic puzzle.

I have 5 parameters like this:

$alphametic 5790813 BEAR RARE ERE RHYME

to get

ABEHMRY -> 5790813

I tried this:

#!/bin/bash
echo "$2 $3 $4 $5" | sed 's/ //g ' | sed 's/./&\n/g' | sort -n | sed '/^$/d' | uniq -i > testing
paste -sd '' testing > testing2
sed "s|^\(.*\)$|\1 -> ${1}|" testing2

but I get an error (with the last sed command); I don't know where the problem is.
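As a cross-check on what the pipeline should output, here is the same transformation (the unique letters of all the words, sorted and joined, paired with the first argument) sketched in Python:

```python
# Hypothetical arguments matching the example invocation:
#   alphametic 5790813 BEAR RARE ERE RHYME
args = ["5790813", "BEAR", "RARE", "ERE", "RHYME"]

# Unique letters across all words, sorted alphabetically.
letters = "".join(sorted(set("".join(args[1:]))))
print(f"{letters} -> {args[0]}")
# → ABEHMRY -> 5790813
```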



from Recent Questions - Stack Overflow https://ift.tt/3xubxNj
https://ift.tt/eA8V8J

how to make remove text with animation from navbar

Hello, I want to make this navbar for phones which will cut out the text when the user scrolls down, but I'm very confused about how to do it: if I just move the navbar up, it will start cutting out my images first. Can someone give me a clue on how to solve this problem? Here is my code (note: I use Tailwind CSS).

Code:

<script>
    //checks if on phone
    let onPhone;
    function checkIfOnPhone() {
        let width = window.innerWidth > 0 ? window.innerWidth : screen.width;
        if (width < 426) {
            onPhone = true;
        } else {
            onPhone = false;
        }
    }
    checkIfOnPhone();
    addEventListener("resize", checkIfOnPhone);
    //user scrolling page
    let phoneNavbarFull;
    document.addEventListener("scroll", scrollListener);
    let topPxSize;
    let scrollTop;
    function scrollListener() {
        scrollTop =
            window.pageYOffset ||
            (
                document.documentElement ||
                document.body.parentNode ||
                document.body
            ).scrollTop;

        if (onPhone && scrollTop < 20) {
            topPxSize = -scrollTop + "px";
        } else {
            topPxSize = "-20px";
        }
        console.log(scrollTop);
    }
    scrollListener();
</script>

<main>
    {#if onPhone}
        <nav class="navbar-background" style="--topPxSize: {topPxSize}">
            <div class="navbar">
                <ul class="flex justify-between">
                    <li />
                    <li>
                        <button class="flex flex-col items-center">
                            <img src="./imgs/Home_light.svg" alt="Home" />
                            <p class="nav-btn-mobile">Home</p>
                        </button>
                    </li>

                    <li>
                        <button class="flex flex-col items-center"
                            ><img
                                src="./imgs/Desk_alt_light.svg"
                                alt="Portfolio"
                            />
                            <p class="nav-btn-mobile">Portfolio</p></button
                        >
                    </li>
                    <li>
                        <button class="flex flex-col items-center"
                            ><img src="./imgs/Phone_light.svg" alt="Contact" />
                            <p class="nav-btn-mobile">Contact</p></button
                        >
                    </li>

                    <li />
                </ul>
            </div>
        </nav>
        <div class="pb-60 mb-60" />
    {:else}
        <nav class="navbar-background">
            <ul class="flex justify-between">
                <li class="nav-btn"><span class="font-bold">LAY</span>CODE</li>
                <ul class="flex">
                    <li class="nav-btn">Home</li>
                    <li class="nav-btn">Portfolio</li>
                    <li class="nav-btn">Contact</li>
                </ul>
            </ul>
        </nav>
    {/if}
</main>

<style lang="postcss" global>
    @tailwind base;
    @tailwind components;
    @tailwind utilities;

    @layer utilities {
        .navbar {
            position: fixed;
            top: 0;
            width: 100%;
            z-index: 100;
        }
        .navbar-background {
            background-color: #000;
            position: fixed;
            top: var(--topPxSize);
            width: 100%;
            height: 45px;
            z-index: 100;
        }
        .nav-btn {
            @apply text-white mr-6;
        }
        .nav-btn-mobile {
            @apply text-white text-sm text-center;
        }
    }
</style>

and here is how it should look more or less.




from Recent Questions - Stack Overflow https://ift.tt/316tDc3
https://ift.tt/3HWOhfL

XAML WPF DataGrid: reduce columns width to fit its content while scrolling except one

Here is the solution to automatically reduce DataGrid column widths during scrolling. I need a slightly modified version where the last column fills all the row width left after the other columns.

The old solution:

private void OnLoadingRow(object sender, DataGridRowEventArgs e)
{
    if (sender is not DataGrid dg) return;
    foreach (var c in dg.Columns) c.Width = 0;
    e.Row.UpdateLayout();
    foreach (var c in dg.Columns) c.Width = DataGridLength.Auto;
}

If dg.Columns.Last().Width = new DataGridLength(1, DataGridLengthUnitType.Star); is added at the end of the method, the last column does not respect the other columns' Auto size and forces its size to 20px (see the pic).



from Recent Questions - Stack Overflow https://ift.tt/3rcmWQO
https://ift.tt/3rl40zD

Alignment problem with a subfigure histogram plot

I am facing an alignment problem with subfigure plots that I do not have any clue how to fix; I have tried to make them images, but I have the same problem. The main problem is shown in the figure below. As it shows, the graphs are not aligned, or they are not of the same size (title/caption as well). The plots are made via the pgfplots and subcaption packages. The code used to generate them is at the end of this text. Any help is welcome!

Edit: The main error comes from the standalone class line:

\documentclass[border=10pt]{standalone}


\documentclass[border=10pt]{standalone}
\documentclass{article}
\usepackage{verbatim}
\usepackage[utf8]{inputenc}
\usepackage{filecontents}
\usepackage{tikz}
\usepackage{tkz-tab}
\usepackage{caption}
\usepackage{latexsym}
\usepackage{amssymb}
\usepackage{amsmath}
\usepackage{subcaption}
\usepackage{pgfplotstable}
\usepackage{pgfplots}
\pgfplotsset{width=7cm,compat=1.8}
\renewcommand*{\familydefault}{\sfdefault}
%\usepackage{sfmath}
\begin{document}
\begin{figure}
\centering %
  \begin{subfigure}[b]{0.32\textwidth}
        \centering
        \hspace*{\fill}%
        \resizebox{\linewidth}{!}{
           
\begin{tikzpicture}
  \centering
  \begin{axis}[
        ybar,
        title={Manufacture 1},
        set layers,axis background,
        grid=major,
        height=6cm, width=8.5cm,
        bar width=0.14cm,
        set layers,
        axis background,
        ymajorgrids, tick align=inside,
       major grid  style={dashed,draw=gray!15}, 
        ymin=1000, ymax=13000,
        enlarge x limits={abs=0.1cm}
        axis x line*=bottom,
        max space between ticks=14pt,
         xticklabel style={rotate=90,yshift=-0.05cm,xshift=0,color=black},
        tickwidth=0pt,
        enlarge x limits=true,
        ylabel={Number of Sale },
        symbolic x coords={
     BMW, VW, Honda
          },
       xtick=data,
       enlarge x limits={abs=0.01},
       enlarge x limits=0.05,
        enlarge y limits=0.05,
       nodes near coords={
       }
    ]
    \addplot [draw=none, fill=blue] coordinates {
      (BMW,  12766) 
       (VW,  12766) 
       (Honda,  12766) 
    
      };

   \addplot [draw=none,fill=red] coordinates {
     (BMW,  10901)
     (VW, 6682)    
    (Honda, 9468)
  
  
     
     };
    
  
   \addplot [draw=none, fill=green] coordinates {
     (BMW, 8679)
     (VW, 3659)    
     (Honda,  7844)
    
        
      };
    
  \legend{car 1, car 2, car 3}
  \end{axis}
  \end{tikzpicture}
 }
%     \caption{Subfigure A}
        \label{fig:subfig8}
    \end{subfigure}

      \begin{subfigure}[b]{0.32\textwidth}
    \centering
        \resizebox{\linewidth}{!}{
           \begin{tikzpicture}
  \centering
  \begin{axis}[
        ybar,
        title={(b) Manufacture 2 },
        set layers,axis background,
        grid=major,
        height=6cm, width=8.5cm,
        max space between ticks=40pt,
        ymin=0,
        bar width=0.14cm,
        set layers,
        axis background,
        ymajorgrids, tick align=inside,
       major grid  style={dashed,draw=gray!15}, 
        ymin=1000, ymax=6000,
        enlarge x limits={abs=0.1cm}
        axis x line*=bottom,
        y tick label style={scaled ticks=base 10:-4},
           xticklabel style={rotate=90,yshift=0.1cm,xshift=0,color=black},
        tickwidth=0pt,
        enlarge x limits=true,
        symbolic x coords={BMW, VW, Honda 
          },
       xtick=data,
       enlarge x limits={abs=0.01},
       enlarge x limits=0.05,
        enlarge y limits=0.05,
       nodes near coords={
       }
    ]
    \addplot [draw=none, fill=blue] coordinates {
     (BMW, 5153)
 (VW, 4522)
 (Honda, 4522)

      };
 
  
    
  \legend{Electric Car 2060 }
  \end{axis}
  \end{tikzpicture}
        }
           \caption{Subfigure B}
        \label{fig:subfig9}
    \end{subfigure}

\begin{subfigure}[b]{0.32\textwidth}
        \centering
        \hspace*{\fill}%
        \resizebox{\linewidth}{!}{
           
\begin{tikzpicture}
  \centering
  \begin{axis}[
        ybar,
        title={Manufacture 3},
        set layers,axis background,
        grid=major,
        height=6cm, width=8.5cm,
        ymin=0,
        bar width=0.14cm,
        set layers,
        axis background,
        ymajorgrids, tick align=inside,
       major grid  style={dashed,draw=gray!15}, 
        ymin=1000, ymax=21111,
        enlarge x limits={abs=0.1cm}
        axis x line*=bottom,
           xticklabel style={rotate=90,yshift=-0.05cm,xshift=0,color=black},
        tickwidth=0pt,
        enlarge x limits=true,
        ylabel={},
        symbolic x coords={
        BMW, VW, Honda
          },
       xtick=data,
       enlarge x limits={abs=0.01},
       enlarge x limits=0.05,
        enlarge y limits=0.05,
       nodes near coords={
       }
    ]
    \addplot [draw=none, fill=red] coordinates {
(BMW, 19289)
 (VW, 20289)
 (Honda, 20289)
      };
 
   \addplot [draw=none,fill=blue] coordinates {
    (BMW,14653) 
 (VW, 20489)
      (VW, 20289)
       (Honda, 20289)
     };
    

  
   \addplot [draw=none, fill=green] coordinates {
   (BMW,14653) 
 (VW, 20489)
     
 
        
      };
    
 \legend{Car 1, Car 2, Car 3}
  \end{axis}
  \end{tikzpicture}
  }
        \caption{Subfigure C}
        \label{fig:subfig8}
    \end{subfigure}
\end{figure}
\end{document}


from Recent Questions - Stack Overflow https://ift.tt/3p7Y1LK
https://ift.tt/3CZANw3

OnDelete(DeleteBehavior.Cascade) may cause cycles or multiple cascade paths on

I have a Products table and a Categories table. I'm trying to make a many-to-many relationship between Product and Category, so I have a table called ProductCategories. I followed the official doc:

https://docs.microsoft.com/en-us/ef/core/modeling/relationships?tabs=fluent-api%2Cfluent-api-composite-key%2Csimple-key

public class ProductCategory
{
    public Guid ProductId { get; set; }
    public Product  Product { get; set; }
    
    public Guid CategoryId { get; set; }
    public Category  Category { get; set; }
}

public class Product
{
    public Product()
    {
        ProductFiles = new Collection<ProductFiles>();
    }
    
    public Company Company { get; set; }
    public Guid CompanyId { get; set; }
    
    public string Name { get; set; }
    public string Description { get; set; }
    public decimal SalePrice { get; set; }
    public decimal? CostPrice { get; set; }
    public int VatPercentage { get; set; } = 25;

    public ICollection<ProductFiles> ProductFiles { get; set; }
    
    public ICollection<ProductCategory> Categories { get; set; }
}

public class Category
{
   public string Title { get; set; }
    
    public Guid CompanyId { get; set; }
    public Company Company { get; set; }
    public Guid? ParentCategoryId { get; set; }
    public virtual ICollection<Category> SubCategories { get; set; }
    public virtual Category ParentCategory { get; set; }
    
    public bool Visible { get; set; } = true;

    public int SortOrder { get; set; } = 1;
    
    public ICollection<ProductCategory> Products { get; set; }
}

And in the model builder I've specified the relations:

        builder.Entity<ProductCategory>() 
            .HasKey(x => new {x.ProductId, x.CategoryId});
        
        builder.Entity<ProductCategory>()
            .HasOne<Product>(pc => pc.Product)
            .WithMany(p => p.Categories)
            .HasForeignKey(p => p.ProductId);

        builder.Entity<ProductCategory>()
            .HasOne<Category>(p => p.Category)
            .WithMany(p => p.Products)
            .HasForeignKey(p => p.CategoryId);

The problem is that when I run Update-Database I get an error: "may cause cycles or multiple cascade paths. Specify ON DELETE NO ACTION or ON UPDATE NO ACTION, or modify other FOREIGN KEY constraints. Could not create constraint or index."

What I want is that when someone in the system deletes a category/product, it should delete the record in ProductCategories. When I try to add OnDelete no action, I cannot delete the category/product without first removing their relationship rows in the ProductCategories table.

Any suggestions on how to solve this the best way?



from Recent Questions - Stack Overflow https://ift.tt/3I9oHV8
https://ift.tt/eA8V8J